Conference report: The Lead Developer NYC 2018
The Lead Developer is a one-day, single-track conference for tech leads, senior engineers, and engineering managers. I went to Lead Dev New York last week and really enjoyed it. It felt well run, warm, and respectful (even more so because it's one of the few tech conferences that pays its speakers).
The mix of technical and leadership topics worked well and the hallway track was enjoyable. Also, I loved that it was held in a theater: I got to sit in the front row of the balcony for the whole morning and had a fantastic view for blogging and taking pictures. The lighting was lovely and the seats were comfortable. It sure beats a hotel conference center!
I also liked that it felt pretty diverse (ok, diverse for a tech conference :-) ). There was a queue for the ladies room, which was unusual enough that people kept commenting on it. What an odd thing for us to be happy about, but there it is. (It occurs to me that conferences could replicate this phenomenon by artificially reducing the number of available bathrooms, but please don't do that.)
The conference included a mix of ten, twenty and thirty minute talks, which is reflected in the wildly different summary lengths below.
Revitalizing a cross-functional product organization. Deepa Subramaniam and Lara Hogan, Wherewithall.
Deepa and Lara met at Kickstarter, then became consultants who teach product and engineering organisations about team health, execution and shipping. They shared five ways to improve a product organisation.
1. Clarify roles. Responsibilities overlap between the product manager, eng manager, engineers and tech lead. Unclear responsibilities lead to inertia or chaos. Documenting and agreeing on the roles gives higher velocity and greater accountability.
2. Create living documents and processes. Codify processes to reduce strife, assumptions and duplicated work. Update them as the situation changes. Document technical plans, do architecture reviews, write test and deploy plans. Have a product development process with milestones.
3. Lead difficult conversations. Lots of us avoid difficult conversations -- they're scary and we don't have great models for them -- but they're often what's needed to unblock a team. Practice and role-play them in advance. If needed, find a facilitator (not HR!) who doesn't have a stake in the outcome. Keep conversations safer with ground rules like assuming everyone is doing their best.
4. Demonstrate mindful communications. (Shout out to Etsy's Charter of Mindful Communication.) Reflect on the dynamics in the room. Who's here, and how will they receive this information? What's at stake for them? Keep the conversation constructive and productive: honesty is not constructive if it's cruel, and anonymous feedback can quickly become trolling. Practice empathy and assume good intentions. Remember that you don't have all the context on the person you're talking to. Listen to learn and be open to having your mind changed. When you request something, ask yourself "is this person in a position to take the action I'm suggesting?"
5. Join forces. Avoid siloing with cross-org meetings jointly led by the leaders of both teams. Answer questions together publicly. Co-write cross-org emails. Demonstrate unity by rolling out the information to everyone at once. Recognise milestones for both teams and celebrate together. Model the united and collaborative behaviour you want to see. If you coordinate, you'll immediately see higher levels of trust and synchronicity.
Follow: @iamdeepa and @lara_hogan.
Collaborative debugging, Jackie Luo, Square
Debugging is a human issue as much as a technical one. Here's a tale of two bugs from Nylas, where Jackie used to work.
The first bug was assigned to an engineer who didn't get to it for two weeks. She debugged for a couple of days but didn't solve it. Then the bug passed to another engineer who was effectively starting from scratch. They made some missteps here. The logs were only retained for two weeks, so by the time they were looking at the problem, they didn't have all of the information. They also duplicated a lot of work in both investigations. A third engineer investigated a theory that turned out not to be the problem, but didn't add any notes to the bug.
We sometimes work on the assumption that one person should claim a bug and fix it. But debugging often takes context from more than one person. Or someone debugging alone for days can be frustrated and want to hand off. We need to leave breadcrumbs for the next person, including what we tried that didn't work, so the next person doesn't try it again.
Compare this to the second case study. An engineer got to the bug inside the logs window, and also asked the user for clear information about how to reproduce it. She documented her process, including things that weren't the problem: this prevented people going down the same dead end. After a few days, she had to leave on vacation, but it was fine to hand off to the next engineer, who was able to solve it. The end result is a happier customer.
Not every problem is a technical problem. Debugging is often a process and a people problem.
Traps on the Path to Microservices, George Woskob, Thoughtworks.
George showed before and after pictures of a monolith to microservices migration which cracked me up.
Microservices promised the world but the reality was disappointing. "We'd ended up with a distributed monolith". George outlined three traps of moving to microservices: underestimating their cost, over-centralisation and neglecting the monolith.
Costs: Debugging microservices is very hard, especially if you're not aggregating logs. (Don't do microservices without log aggregation.) The architecture gets more complicated: things that were simple function calls now need data sent over the wire, with retries, circuit breaking, and handling of new kinds of errors. You suddenly have many things to build and deploy, and more configuration to keep track of. Developers need to run a bunch of local services and datastores. Inter-team communication gets complicated.
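To make the "simple function call becomes a network call" cost concrete, here's a minimal sketch of the kind of machinery each remote call suddenly needs. This is an illustrative toy, not anything from George's talk; all names are hypothetical.

```python
import time


class CircuitBreaker:
    """Toy circuit breaker: after `max_failures` consecutive failures,
    calls fail fast until `reset_after` seconds have passed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast instead of hammering a struggling service.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

In a monolith, none of this exists: the "call" is a plain function call that either works or raises. That's the hidden tax George is describing.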
Don't dive in head first: you should be deriving some benefit from every new microservice. Start a proof of concept in a part of the code where you want faster feature development or more scalability. Then reevaluate whether the benefits outweigh the costs.
Overcentralisation. Too much shared code enforces a tight coupling. Don't implement business logic in shared libraries. Also consider organisational centralisation. Avoid organisational tollgates, like a QA team who insist on a monthly release cycle. The move to microservices has to cause a change in your org culture. Embed QA and ops in your team. No walls.
Neglecting the monolith. You're deluding yourself if you say "The monolith will be gone soon". Don't let people add technical debt to the monolith. The core of your business functionality is still there, so show the monolith some love. Improve the modularity of its code. Strive for loose coupling. Look into the principles of domain-driven design.
Your microservice should own its own data, so drawing the domain boundaries correctly now makes it easier to move to microservices later.
Don't let anyone claim your monolith is impossible to test. You need to test your business logic so you can be sure it still works when you move it out.
Use the strangler pattern: gradually wrap the monolith code with calls to microservices, until nothing remains of the monolith but the surrounding 'vines'.
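The strangler pattern can be sketched as a thin routing layer that decides, per capability, whether to call the new microservice or fall through to the legacy monolith code. This is a hypothetical illustration (the function and service names are made up), but it shows the shape of the "vines":

```python
# Legacy codepath, still living in the monolith.
def monolith_get_invoice(invoice_id):
    return {"id": invoice_id, "source": "monolith"}


# New codepath, backed by a (hypothetical) billing microservice.
def billing_service_get_invoice(invoice_id):
    return {"id": invoice_id, "source": "billing-service"}


# Capabilities migrate one at a time; this routing table is the "vine"
# that gradually wraps the monolith.
MIGRATED = {
    "get_invoice": billing_service_get_invoice,
}


def get_invoice(invoice_id):
    handler = MIGRATED.get("get_invoice", monolith_get_invoice)
    return handler(invoice_id)
```

When every capability routes to a service, the monolith behind the routing layer can finally be deleted.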
Don't put so much process in place that you remove autonomy. Make sure your teams own the whole process. And you don't need to wait for microservices to make your architecture better. Follow the campsite rule: leave it better than you found it.
The Critical Career Path Conversation, John Riviello, Comcast
John found himself contemplating a decision that is familiar to many senior engineers: "Should I keep doing this thing I'm awesome at, or should I move into management and make other people awesome?" He told the story of how he moved from IC tech lead to manager for an unlikely reason: only managers at Comcast got offices.
Some myths of management:
Myth: Management is the same job as tech leading. No, there's more to it than just approving vacation time now and again. He knew his coding time would decrease but assumed he'd code at nights and weekends instead. (Laughter from the audience.)
Myth: Management is less work. Delegating sounds easy but it's not. It might seem like managers aren't doing much, but it's a different type of work. Shoutout to Lara Hogan's post about Manager Energy Drain. (ed: I recommend this post to someone around once a week).
Myth: it's a promotion. Well, it's presented that way, but you're actually moving into an entry level position. It takes time to get back to performing at a senior level.
Myth: verbal contracts are binding. Any deal you make with your manager ("you'll still get to code") only lasts until your manager's manager disagrees, or your manager leaves the company.
Myth: you will always have an office. (Hahaha, awwwwww). They moved to another building and... there weren't any offices.
The shift from IC to manager is hard. Humans like things we're able to do and it's frustrating to learn a completely new set of skills. Your focus changes to the people. You don't get the quick wins throughout the day that you do as a coder. Wins are fewer (but bigger) and it's harder to feel fulfilled.
In two years as a manager, John learned that he could handle it but it wasn't what he wanted to do. He missed coding and stopped feeling excited to go to work.
But John's experience doesn't mean management is wrong for everyone. Our industry needs more managers. Google tried an experiment with no managers and it didn't work. A Gallup study showed that only 10% of people had management skills like the ability to motivate, drive outcomes, build relationships, and make decisions based on productivity rather than politics.
The Gallup study also showed that choosing the right manager is important, but that companies failed to choose the right candidate 82% of the time.
John did a survey on why people became managers. He heard a mix of passive and active reasons. For some, the opportunity just arose and they drifted in. Others chose what they saw as the higher impact path. Some companies had no IC track so management was the only path to a higher salary.
Managers told him that they found it rewarding to combine technical and people skills, and that building teams has the same feeling of creating something as building code. They felt rewarded by hearing reports say the advice they gave was helpful. They needed to adjust to being measured by the team's success, not their own, and to feeling senior and embracing being agents of change. And another encouragement here to have hard conversations, even if they're weird. Get coaching and learn to do them well.
It takes a lot of time and effort to be a good manager, but it's worth the effort. It's also ok to change your mind and come back. It's easier if you time it with a reorg.
If you're deciding which to do, ask other people for their experiences and consider what excites you. If you have free time, do you want to write code or study management? (ed: I always ask "which do you want to get better at?"). Immerse yourself in being a manager before being a manager. Read blog posts, watch talks, read books. Be a mentor. That'll show you what it's like to be a manager in a safe environment.
Universal Apps: Architecture for the Modern Web, Elyse Kolker Gordon, Vevo.
Web apps come in two main flavours:
Traditional apps download the initial .html page and parse it, then go back to the server for any other files the page refers to. If you click on a link or otherwise interact with the page, the browser makes a new request to the server to pull down a whole new page, even if what the user sees is only going to change by a word or two. Single-page apps instead load a JavaScript application once and update the page in the browser, only going back to the server for data.
Isomorphic (also called universal) apps run the same code in both places: the server renders the initial page, then the client-side app takes over. They're good because:
1) They're better for SEO than single-page apps. Google can crawl your content because it's getting a full webpage from the server.
2) It feels faster. Users see content immediately, and then get all of the speed benefits of single page apps.
The goal is to write code once but run it in two environments. Ember, React, Vue.js, and other frameworks all support this. If you just want to try it out, Next.js (React) and Nuxt.js (Vue.js) are out-of-the-box solutions.
Isomorphic web apps also have some difficulties. Reusing the code in competing environments means keeping track of which capabilities are available in both. The server has no window object. Testing can be difficult, since the code will run in two states.
Universal architecture adds complexity but gives you clear benefits for SEO and performance.
(Also, obviously any errors in this write-up are mine and not Elyse's! Let me know if I got anything wrong and I'll fix it.)
Book: Isomorphic Web Applications
Improving Reliability with Error Budgets and Site Reliability Engineering, Liz Fong-Jones, Google.
Liz works on customer reliability engineering at el Goog. Major concerns in reliability are not being sure that things will run the same way in production as in a development environment, and scaling without hiring more people.
Developing code is between 10% and 60% of the cost of software, but traditional dev and ops incentives aren't aligned. Developers are focused on shipping; operators are running a LAMP stack and saying no to changes. Just as Agile broke down the wall between dev and business, DevOps broke down the wall between dev and ops.
SRE is an implementation of DevOps. It's a job function, a mindset and a set of engineering approaches. It treats operations not as a separate discipline from software engineering, but as the same thing. It means designing systems to be reliable from the start.
SRE's main principle is error budgets: a certain amount of downtime we're willing to tolerate. These should be a product concern: product managers define what users want and what's reliable enough. We measure how we're doing against that standard, and then we know how much room we have to ship features or run experiments.
Service Level Indicators are the raw metrics that tell us how the system's doing. Service Level Objectives are our target for what fraction of interactions should be good. A Service Level Agreement defines the consequences of missing the SLO.
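As a back-of-the-envelope sketch, an availability SLO translates into an error budget like this (the numbers are illustrative, not from the talk):

```python
def error_budget_minutes(slo, window_days=30):
    """Minutes of allowed 'bad' time for an availability SLO
    over a rolling window. E.g. 99.9% over 30 days ~= 43.2 min."""
    return (1.0 - slo) * window_days * 24 * 60


budget = error_budget_minutes(0.999)  # ~43.2 minutes per 30 days
spent = 12.0                          # hypothetical downtime so far
remaining = budget - spent
print(f"budget={budget:.1f} min, remaining={remaining:.1f} min")
```

The remaining budget is what the team is free to "spend" on risky pushes and experiments; when it's gone, the priority shifts back to reliability work.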
Concrete day-to-day practices for operations include:
- metrics and monitoring. Avoid noisy alerts and only involve humans when the SLO is threatened.
- capacity planning both for organic and inorganic growth.
- change management. Google data showed that 70% of outages were due to a new binary or new config. Mitigate that with progressive rollouts, fast detection, and rolling back changes without a human in the loop. Automate to avoid human error. Spend the error budget to increase development velocity. If we have leftover error budget, we can push more often, try more experiments.
- emergency response. Don't panic. Pull in help if you need it. Mitigate first, then troubleshoot. Define your incident and postmortem criteria and write them down before the outage. Figure out how to prevent it from happening again. Postmortems should be blameless. Any kind of failure where it's possible for a human to do the wrong thing is a system with a missing safeguard.
SREs are a mix of software and systems engineers. They include people with great computer science, and people who know the real world, but everyone should be able to code.
To get started with an SRE organisation, create SLOs. Hire people who write software and ensure parity of respect with the rest of the developer org. Allow SREs to choose their work. Empower them to enforce the error budget and toil budget. Don't burn out your SREs: they're valuable and scarce so deploy them thoughtfully.
SRE came from Google but Liz's team has worked with dozens of companies to teach them to implement SRE at any scale. Start with one service. Empower that team with strong executive sponsorship and support. Incremental progress is fine: if you go from 90% to 80% ops load, that's still 10% freed up. When you have a solid case study, share it with other teams.
Fear of the computer, Maggie Zhou, Slack.
Modifying software is necessary to make progress, but even experienced engineers can be scared of changing complex distributed systems. Monitoring systems, caches, CDNs, etc, mean a lot of moving parts. We can have surprising failure modes, like when provisioning new machines caused an Apache upgrade across the fleet or sorting an array caused a site-wide outage.
You need a toolkit to ensure that your team can confidently make changes while keeping the product available. Metrics and feature flags can help.
Etsy had a scary database that was used by everything and had been treated as an unknowable black box. They instrumented the code to understand the baseline, then added feature flags.
Feature flags are typically used in products for gating user-visible features, A/B testing and merging code that isn't ready, but they're also good for incremental ramp-ups of new codepaths in infrastructure. Knowing where the off switch is gives us confidence: if there's a performance regression, we can quickly ramp down again. Being able to watch the metrics and undo if necessary makes it safer to make changes.
Mobile and desktop applications especially need server-side feature flags, because if we deploy a bug, pushing out a fix can take days or weeks.
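A common way to implement this kind of incremental ramp-up is percentage-based bucketing. This is a generic sketch (hypothetical names, not Etsy's or Slack's actual flag system): hashing the user id, rather than sampling randomly per request, keeps each user in a stable bucket, so ramping from 1% to 10% to 100% only ever adds users.

```python
import hashlib


def flag_enabled(flag_name, user_id, percent):
    """True if this user falls inside the first `percent` buckets
    for this flag. Deterministic per (flag, user) pair."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return bucket < percent


# Ramp the new codepath up gradually; ramp back down if metrics regress.
if flag_enabled("new_db_driver", user_id="u-42", percent=10):
    pass  # new codepath
else:
    pass  # old codepath
```

The "off switch" is just setting `percent` back to 0, which is why watching metrics during a ramp makes changes feel so much safer.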
A second way to build confidence is to really understand the change. Not by staring at the code, but by dogfooding and having good analytics.
Slack dogfooded a new database web driver by pushing to internal users first. There was a performance regression which could easily have been lost in the noise, but they'd segmented their metrics so that the new and old usage were distinct, and the problem stood out.
Quality metrics are important. Bugs in measurement software lead to bad decisions. Make sure to measure the same thing from different perspectives.
Further reading: https://gist.github.com/zmagg/3f363d027d3f660bae1bed04b1588ad1
The New Manager Death Spiral. Michael Lopp, Slack
Michael shared a cautionary tale full of hard learned advice. He emphasised that this is a worst case scenario: it's many stories synthesized together.
It starts with a conversation where you discover you're a manager. 60% of managers get no training. But you tell yourself that you can do it: "I'm the boss!". And then you make a bad decision: you think you can do everything yourself, and you sign up for too many things. As an IC, you're used to visibility and ownership of the things you build, and delegating is an unfamiliar loss of power. So you're overcommitted and doing unfamiliar work, and you can tell you did a sub-par job. You discover your job is no longer to get things done; it's to get things done at scale. You should now ask for help, but most people don't.
You delegate badly, handing off things that don't matter, without giving full context or ownership. Your reports start to fail because they don't have authority or they're missing context. They ask for help. You think, but don't say, that if you'd run the project the problem wouldn't have happened. Instead you double down on control, give them the barest of advice, and imply that their job is at risk if they don't figure it out. This is the point of the spiral where they stop trusting you, stop talking to you and start talking to each other.
As a leader your job is to aggressively delegate. It's hard when it's a project you'd do great at, and someone else will merely do ok. But they get to learn and you get to coach them from a B to an A. You demonstrate trust by giving them work that is scary to them. But if you're on the manager death spiral you don't do that, and they fail.
When humans have an absence of information, they pour in their worst fears. Opinions are treated as facts. Your credibility is damaged and eventually you hear what the team is saying about you. "And you think 'This is not me.'" And that's the bottom of the spiral.
Management is not a promotion. A promotion is a reward for doing things you know how to do in your current job. If you're going into management for the first time, you're starting over. Management is a career restart. And 60% of new managers get no training.
Let others change your mind. Learn how to understand different perspectives. You have something to learn. Different humans are going to make your project better. Ideas don't get better if everyone agrees. You'll debate and have tense conversations and come out the other side having learned something new.
Delegate more than is comfortable.
Start small. For each act that you're taking, ask whether you're building or eroding trust.
Better Incident Management to Reduce MTTR, Beth Adele Long, New Relic.
Incidents are high-stakes, high-cadence group activities, and someone needs to coordinate the response. The incident commander is that one person (not a committee!). They're not in charge of solving the technical problem; they solve the human-systems problem. They're a therapist, a translator and a coach.
The incident commander needs to understand the incident response process: what are the severity definitions, where do we communicate updates, what's the incident lifecycle. They need to understand the priorities, e.g., is it more important to save the data or to maintain the user experience. But they don't need to be an engineer. These are human systems.
The incident commander regulates the "flow" of the incident:
- the flow of emotion. They're aware of fight/flight/freeze responses and are ready to cool things down, slow people down, and unwedge stuck people.
- the flow of information. They understand who needs to know what and make sure that customer support, execs, legal, etc, each get the view on the incident that they need.
- the flow of analysis. They ask questions to help sync up everyone's mental models. They ask "how do you know that's true?" and get people to articulate their process to themselves and others. They amplify voices that aren't being heard and make sure critical information isn't missed.
Bowerbirds of Technology: Architecture and Teams at Less-than-Google Scale, Sam Kitajima-Kimbrel, Nuna
Most websites aren't in the top 100. Big companies like Google and Facebook have huge amounts of data, thousands of specialised engineers and near-unlimited resources. They make different choices than small companies do.
Jeff Bezos famously mandated that all Amazon services communicate with each other via service interface calls over the network, and that all service interfaces be designed from the ground up to be usable by external customers. Building and operating all of those services took a lot of developer time.
Adopting the bleeding edge means leaving some users behind. Even Google didn't start out at Google scale. Exponential growth feels slow at first. Care about user trust above all else. Aim for fast, safe iteration.
Your developer team's time is finite. Be like bowerbirds, who build their nests from found materials: we don't have to reinvent the wheel. Use open source, vendors, off the shelf software, and combine what you find. Use mature, maintained software. Care about security, stability, widespread use and documentation. Think about whether you can get a support contract. Understand the costs, including opportunity cost, of building versus buying.
Take care of your humans. Properly staff on call rotations and make sure on call teams have the authority to prevent repeat breakages. Be inclusive, not just diverse. Have ground rules and a team charter for things like how fast code reviews are turned around, meeting etiquette, feigned surprise. Retain and promote. Don't make minorities do diversity work as an unpaid second job; hire professionals.
Know the impact of your outage on your users. Overcommunicate and talk to them. Update your status page if you even think there might be an outage. Think about your failure domains. Practice failovers. Understand your security threat models.
Everything You Need to Know About OpenAPI 3.0 (formerly Swagger) in Ten Minutes or Less, Erin McKean, IBM and Wordnik
OpenAPI is a standard language-agnostic interface to REST APIs which allows understanding the capabilities of a service without access to its code or documentation. It used to be called Swagger and was created by Wordnik, who open sourced it.
It generates documentation and code and makes it easy for developers to see the headers and schema for your API. You can think of it as making an API easier to use, like attaching a handle to a bucket. Generated code might not be the most beautiful, but it gets people started and some people won't be able to resist writing better code to improve it.
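For a flavour of what "the handle on the bucket" looks like, here is a minimal, hypothetical OpenAPI 3.0 description of a one-endpoint API. Specs are usually written as YAML or JSON files (this sketch builds the same structure as a Python dict); tooling then generates docs and client code from them.

```python
import json

# A minimal OpenAPI 3.0 document for a made-up one-endpoint API.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Word API", "version": "1.0.0"},
    "paths": {
        "/words/{word}": {
            "get": {
                "summary": "Look up a word",
                "parameters": [{
                    "name": "word",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {"description": "The word's definition"}
                },
            }
        }
    },
}

# Serialise it the way it would appear in an openapi.json file.
print(json.dumps(spec, indent=2))
```

From a document like this, generators can produce reference docs, server stubs and client SDKs without ever reading the service's source code.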
OpenAPI enthusiasts will meet in September at the APIStrat conference in Nashville.
(ed: Erin is a really funny presenter and I hope to see her at a longer talk some time.)
The Continuous Culture, Kim van Wilgen, ANVA
Small projects are more successful than large projects (74% vs 10% clear success). The same report says that large projects have 64% unused features, versus 14% in small projects. Small projects notice earlier that code won't be needed and change direction. The cheapest code is code that isn't there.
The Cynefin model uses the relationship between cause and effect to describe complexity. In the "simple" quadrant, cause and effect are directly connected. We often think software development lies in the "complicated" quadrant -- cause leads to effect, but you need to be an expert to see how -- but it's really in the "complex" quadrant: there's a relationship between cause and effect, but we have to probe to understand it. Our assumptions are wrong.
High performing teams change faster. Eight lessons for incremental change:
1. Safely and sustainably release software. Release frequently, but not more frequently than is useful.
2. You're only as small as your MVP and as agile as your roadmap. The code is only useful if you're using it. Rather than year-long planning, be agile and change direction based on new knowledge. Don't spend time talking about code you're not going to write.
3. Get manual work, like auditing and compliance, out of the critical path. Use version control, automated testing, etc.
4. Aim for autonomy. Managers, directors, HR don't scale; they need to set the vision and let go. Be transparent.
5. Continuous feedback, including praise. Continuous learning.
6. Reduce risks. Make it so that putting wrong code into production isn't terrible. Use the test pyramid: many very fast tests early in the development process, fewer slow tests at the end.
7. Minimise branching. Try collaboration, dark launches and feature flags. Only if none of those work should you branch.
8. Customers don't want continuous delivery. They want stability. Maintain their trust.
Be ready for surprises along the way.
Amazing internships! Silvia Esparrachiari, Google
Tech internships are clearly good for interns: they get practical experience and are exposed to different roles and professionals from different backgrounds. Hopefully they get to deliver something useful and get paid for it.
But internships are good for the host too. You have an opportunity for leadership, fresh ideas coming to your team, and an opportunity to plan out a project that will last a finite time period.
Being a good intern host means being respectful. Know your intern's name. (Laughter from the audience.) Don't call them "the intern" or "the kid". You're a role model now, so be aware of what you're doing; you may see your interns explaining things the way you would, or mirroring your body language.
A good intern experience needs a solid plan and planning starts with the choice of project. Make it something meaningful and interesting, not "fix the bug I'm avoiding". Make it something you know how to do. Ideally, make it something they can deliver to someone outside your team. Delivering to a client is what we do all the time and it's a different experience to just making something for your team.
Your project should have adjustable scope. Build in buffer weeks in case there are obstacles and have a stretch goal. Have a final challenge.
For a typical 13-week internship, schedule 10 weeks for the main project. Give your interns time to learn. Be beside them when they're discouraged. Provide focus and keep them on track.
Make sure the plan includes time to launch the thing and get user feedback. Keep the last week free for tying up loose ends.
Give your interns a test environment, so it's safe to try things out. Make sure your documentation is up to date, and have a backup host who will answer questions when you're not available.
Give your intern a project that you would like to do.
Make the Right Thing the Easy Thing: How to Design Systems and Processes Teams Actually Follow. Jason Lengstorf, IBM.
The role of a manager is to make sure people feel cared about and safe. The role of a lead developer is to keep the project going. When something breaks, we can fix the process or fix the problem. Typically we default to fixing the immediate problem: it feels great to be the team rock star! It's less fun to push the process and be the team auditor. But rockstars don't get days off.
We talk about the bus factor, but it's better to think of it as vacation tolerance. Rockstars are bottlenecks who create dependent teams. Even people who could solve the problem themselves know that the rockstar will come along and "well, actually" their solution later, so they don't try. Eventually the rockstar leaves and nobody knows how to do anything without them.
Teams are slowed down by lack of confidence in ability, and lack of clarity about goals. A good process makes it easier to make decisions because the team can act without wondering if they have authority or need approval.
A good process needs excellent onboarding and documentation, ongoing education and training. Make it easy for people to know if they're following the process, e.g., with code reviews and style guides. Processes can fail when they're written by people who are good at understanding systems but not good at understanding people. You can be "correct" but people may resist and hate it.
Use the model of the rider and the elephant. The rider is the logical brain which uses willpower; the elephant is the subconscious brain that does the thing that feels right. The rider has limited control over the elephant. If we're logically correct, we can appeal to the rider, but we need to appeal to the elephant by making the right thing the easy thing.
Four key things to focus on:
Give people emotional rewards, by making their lives easier or giving them positive feedback. It doesn't matter if you're logically correct; your solution has to be approachable. Think of all of the abandoned open source projects that do the right thing but didn't do a good job of onboarding.
Use automation, like CI/CD, lint, code coverage checks. Automatically format code instead of making a human follow the style. Use robots as code reviewers for things that humans might choose to flag or let go: the robot doesn't have unconscious bias or good days and bad days.
Keep it simple. Consider the cost of onboarding and training. Use stable open-source tools where possible. Write code that's small and easy to delete. Build for now, not 5 years from now. Keep it as simple as possible for as long as possible.
Avoid yak shaving. This is where you want to work on a problem but you have to do something else first, but that's blocked by something else; repeat until you're 8 tasks deep. Provide easy development environments and don't make people have to do a bunch of work before they can start working.
Set people up to succeed: don't apply process to things that are already a mess. It's easy to keep something great, but hard to take something bad and make it good.
A lead developer isn't a rockstar: they're someone who creates guard rails, defines processes, removes bottlenecks and makes the team stronger. That gives us safe teams with high vacation tolerance.
(ed: Jason's slides are the most beautiful slides.)
Need For Tests: Most Wanted. Rushaine McBean, Kickstarter.
Testing isn't easy, and it takes time, a limited resource. We need to be sure it's worth it. Our main motivation is courage: tests let us trust the code and therefore save us time down the road. It's a longer term gain.
The kinds of tests we need depend on the context. Selenium is valuable, but not enough. We want fast tests early and slower tests later. (ed: The test pyramid again! That's a major thing I'm taking away from this conference.) Unit tests give quick feedback.
Writing tests before creating code makes the code more testable. Good tests shape the code and make it modular. And it gets easier the more you do it.
If introducing a testing culture, start small. Find a bug that could have been caught by tests and show your organisation the value of testing to catch it. Test for both positive and negative outcomes. If you find yourself copying and pasting a lot, that's a sign that the tests are too big. Tests should fail uniquely; if an existing test already covers a failure, maybe you don't need a new one.
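To illustrate testing both positive and negative outcomes without copy-pasting, here's a table-driven test in Python. The validator and its rules are invented for illustration; each case states its expected outcome, and adding a case is one line rather than a pasted test.

```python
# Hypothetical validator, invented for this example.
def is_valid_username(name):
    """A username is 3-20 characters, alphanumeric plus underscores."""
    return (3 <= len(name) <= 20
            and all(c.isalnum() or c == "_" for c in name))

# One table of cases, positive and negative alike, instead of a
# copy-pasted test function per case.
CASES = [
    ("rushaine", True),    # ordinary name
    ("a_b", True),         # shortest allowed
    ("ab", False),         # too short
    ("x" * 21, False),     # too long
    ("bad name", False),   # space not allowed
]

def test_is_valid_username():
    for name, expected in CASES:
        assert is_valid_username(name) == expected, name

test_is_valid_username()
```

With pytest you'd get the same effect from `@pytest.mark.parametrize`, which also makes each case fail uniquely with its own report line.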
The Best Hearts of My Generation: Intersectional & Inclusive Standards for Developers. Patricia Realini.
Being an underrepresented minority is tiring. It costs mental energy to continuously assimilate. The Allen Ginsberg poem "Howl" talks about the destruction of "the best minds of my generation". Patricia sees inclusion work as a similar howling into the wind. We discount the words of oppressed people who show visible emotion. Listen to the people who are frustrated and in pain, not just the spokespeople who seem mature and responsible.
As leads, we have power to bring positive outcomes. Look at the opposite of privilege: stereotype threat, stigma pressure, cognitive dissonance as you try to minimise the extent to which people perceive you as a minority.
Minimising other people's experiences, or not believing them, adds to the trauma. It normalises toxic habits. It leads to gaslighting.
The work of underrepresented people is often invisible. We celebrate César Chávez but not Dolores Huerta. We forget that the first programmers were women, who were pushed out when the work began to be seen as valuable. The industry is still hostile.
Minorities have to work much harder to be successful. Social loafing -- spending social capital to avoid doing as much work -- isn't an option; with less social capital, underrepresented folks pick up the slack. A supportive workplace can make this easier by delegating explicitly to individuals.
Think about communication style. Ask how everyone communicates, how they recharge. Level the playing field. If one person has reduced mobility and can't do walk-ups, make it so everyone talks on slack and nobody does walk-ups. Assume everyone is the expert on their own experience.
Vague feedback correlates with lower performance reviews. Give specific, measurable feedback. Avoid broad generalisations. If giving advice, consider that what works for you may not work for someone else.
Get context before correcting someone. Give positive as well as negative feedback.
Underrepresented minorities may fear retribution and are trained to say yes, even when they don't want to. Respect it when someone says no to you. Be aware of asking vs guessing culture.
Use ring theory for deciding when to complain and when to comfort.
We all mess up and that's ok. Own your mistakes and learn to apologise. Make space to repair the mistake and listen to the person you wronged. Intentional inclusiveness is a cyclical process: as you fix things, you'll learn and discover new things to fix. Ask underrepresented minorities what the biggest problems are and start there.
The repeated theme I heard at this conference was "management is not a promotion". I think this is a message our industry needs to hear until every company has a parallel IC track. Leadership is not the same as management, and I like that the Lead Dev is pushing this model of senior engineers as technical leaders.