The Future Leader's Guide to AI Adaptation
It's easy to think it's all doom and gloom as you look around at the evolving AI landscape, but this is just the time to level up. I'm optimistic about what this future means for leaders.
There’s no denying that some companies believe AI will let them shrink their workforce. I don’t believe that’s the long-term play for enduring companies, but in a tough macro-economic environment it’s a survival strategy some will experiment with. Those of us leading large teams are asking ourselves: “How much longer will there be large teams to lead?” Regardless of the outcome, I think it pays to prepare for multiple futures. If your job is impacted, the skills below will leave you better positioned to adapt.
I’m super optimistic about the future. I think the skills we develop managing humans put us in a good position for this coming revolution in how companies build products, lead teams, and steer armies of automatons.
You probably have a lot of these skills too. Are you good at communicating direction? Good at breaking work down and focusing on iterative steps toward some outcome? Good at setting measurable objectives? Can you provide examples of what you want to see built? Then, IMO, you are not only well equipped to leverage this next phase of how software is built, but you are in a position to help others too.
Opportunity is knocking
Instead of worrying that we won’t have a job in a year, why don’t we start talking about how to adapt? Each of these topics could be its own post, but I want to lay out some reasons you should see opportunity in all of this. Here are five reasons to be excited about leaning into this new era.
The robots will teach you
I cannot think of another point in my lifetime where the technology I needed to learn was itself the teacher. There aren’t many sources of information about how to interact with AI tools that are better than the AI tools themselves. Want to create a great prompt? Ask Claude to help you write a prompt! Want to learn how LLM fine-tuning works? Ask ChatGPT to develop a course for you along with hands-on examples. The sky is the limit here - these things are capable of making you an expert faster than any technology before them - all for $20/mo.
The practice we all need is learning how to interact with these tools to get what we need. Much like with humans, when you communicate your needs poorly, you get poor results. However, if you approach it as a collaborative exercise and ask the tools to help you craft the instructions, the results improve considerably.
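If you’ve never tried that collaborative loop, it can start with a single API call. Here’s a minimal sketch using the Anthropic Python SDK - the model id is a placeholder and the task is just an example - where you hand the model a rough idea and ask it to interview you and draft the prompt itself:

```python
# Minimal sketch: ask the model to help you write the prompt.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

rough_idea = "Summarize our weekly engineering status updates for executives."

meta_prompt = (
    "I want to write a high-quality prompt for this task:\n\n"
    f"{rough_idea}\n\n"
    "Ask me any clarifying questions you need, then propose a reusable "
    "prompt template with placeholders for the inputs I will provide."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id; use whatever is current
    max_tokens=1024,
    messages=[{"role": "user", "content": meta_prompt}],
)

print(response.content[0].text)
```

You can do the same thing in the chat interface without writing any code - the point is that the tool will happily critique and improve the instructions you plan to give it.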
Managing uncertainty is a skill
If you’ve managed teams of people, you’ve probably managed uncertainty. You may not have loved the experience and you may have felt like you could have done better - we all wish for a crystal ball to tell us what the right moves are - but my bet is you did alright. I think now is the time to lean into that experience and think about what helped your teams make decisions.
My early technology career was all about learning as fast as I could. This period really brought out my tendency for hyper-focus and obsessive attention to projects. This was a huge tool for me in handling uncertainty - go build something… anything. Whether that was installing Linux over and over, or building my own network labs, or learning to write code, each of these obsessions was focused on a learning objective. As I moved from one learning goal to the next, the value compounded.
As I shifted into leading people it was about challenging myself and my team in interesting ways. We could solve the problem a traditional way, but what’s the interesting way? How can we shape this problem into something we want to solve, something that teaches us? This same opportunity exists for you today when using these tools. How do you shift from using these tools the same way everyone else does - to using them in an interesting way?
Be a player to coach
Today I feel that need to obsess again. The challenge is learning what skills I can bring to a world where AI is probably going to write most of our code and our engineering organizations look different than they do today. I don’t get to step into the future and read the playbook for those teams, but people are already learning by experimenting today. How can we experiment and begin to challenge our own way of working? To do that, I think I need first-hand experience doing the building, so I’ve gone back to spending significant time learning how to use these tools to build software.
Am I building enterprise-grade software? No, I am not. I’m building fun things, useful things, hard things, and easy things. I’m experimenting with how far I can push this technology and what it’s useful for. Along the way, ideas are surfacing that I wouldn’t have arrived at any other way.
Maybe I’ll build something with enduring value, but that isn’t necessary for my objective to be met. I’ll understand much more about what these tools can and cannot do, and I’ll be able to help others learn to leverage them.
Leverage a level playing field
Everyone has been astonished at the pace of change here, and a lot of people haven’t gotten the memo that it’s time to jump in with both feet. The sooner you get started, the further ahead you’ll be. Yes, there are a handful of companies doing really neat stuff, and no, you probably don’t work at one of them - This. Does. Not. Matter.
It might cost you a little money to sign up for some of these tools - think of it as investing in your own training, like paying for a book a month. If you really lean in, you’ll get more value than any single book has ever given you in a month.
I would strongly suggest investing in both a coding tool like Cursor or Windsurf AND a general-purpose assistant like ChatGPT or Claude. I personally use all of these tools. The technology is evolving weekly, and there’s no way to pick a single best one right now. One week it’ll be one tool, the next week it’ll be another, and having access to all of them lets you observe the step changes that happen as new models and capabilities are released.
Feel the rough spots
In the last few months I’ve built software in Python, TypeScript, Rust, and Swift. I’ve built a terminal-based RAG tool that taught me how vector databases and evals work, a web prototype of a gear game, and an iOS to-do list app. These tools let you move around the software stack more easily than ever before. Not sure how to start? Collaborate with Claude for a bit on your idea: ask it how to structure the project, what technology to use, and what capabilities you should build first, second, and third. Ask it to develop a plan that a coding assistant can use, and put that plan into a markdown file. Hand that file to Cursor and watch your app come to life (there’s a rough sketch of that step after the next paragraph).
…and then spend hours going in circles trying to figure out how to make the smallest things work right. This is what the tech is like right now and this is what your teams will be grappling with when you ask them to use these tools. Some of it is awesome and some of it can be really frustrating. How are you going to understand this new reality if you aren’t living it?
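Here’s what that plan-generation step can look like in code, again as a rough sketch with the Anthropic Python SDK - the model id, the idea string, and the PLAN.md filename are all assumptions for illustration:

```python
# Minimal sketch: ask Claude for a build plan and write it to a markdown file
# that a coding assistant like Cursor can use as context.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import pathlib

import anthropic

client = anthropic.Anthropic()

idea = "A small iOS to-do list app with local persistence and reminders."

request = (
    f"I want to build this: {idea}\n\n"
    "Propose a project structure and technology choices, then break the work "
    "into small, ordered milestones that a coding assistant could implement "
    "one at a time. Return the entire plan as markdown."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; swap in the current model
    max_tokens=4096,
    messages=[{"role": "user", "content": request}],
)

# Hand this file to your coding tool and work through the milestones.
pathlib.Path("PLAN.md").write_text(response.content[0].text)
```

From there you open the project in Cursor, point it at PLAN.md, and work through the milestones one at a time - which is exactly where the circling tends to start.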
With all of that opportunity to learn, I also think there are rough waters ahead for many organizations. But here’s the thing - these aren’t going to be problems that are readily solved with fewer engineers or fewer managers.
What might need re-thinking
While I’m optimistic that TPS reports will be a thing of the past, and that finding a time-slot for 5 people to hop on a call will become easier, there are some things that will just be different. Organizations will face new challenges that you will be uniquely positioned to help solve. Here are a few predictions.
Reviewing change from the firehose
If you’ve used any of these AI code generation tools then you’ve seen the volume of code that can be produced. If we succeed at making our engineers 50x or 100x more productive, how are we going to review all of that output? Humans are already a review bottleneck in many organizations. Strong automated testing has long been a requirement, but now these same tools are generating the tests - so how do we evaluate successful outcomes?
Teams are going to have to develop tools and muscles to constantly evaluate the output of these tools, the products they build, and the ways that customers interact with them. Ideally AI can be leveraged to process a lot of that information, but someone has to build all of that.
Decision-making becomes the bottleneck
Do your teams get stuck in decision paralysis now? They will need to learn to process larger amounts of data and evidence to support faster decision making in order for products to evolve quickly. Building experiments may become trivial, but how will you leverage those experiments to learn and inform high quality decisions? How will your organization push decisions out to the edge, where product is being built, to enable high-velocity development to occur?
Not only will decision-making need to be distributed, but fast decision making will require fast access to data and strong analytics tools. Organizations that have relied on human intuition or high consensus behaviors will probably struggle to adopt a higher velocity development process.
Measuring individual performance
How are you going to evaluate the performance of engineers who are leveraging AI tools? What are the critical skills they will need, and what experience is going to be crucial? Does 15 years of industry experience mean the same thing 3 years into this AI revolution? I suspect evaluation will have to shift even further toward outcome-driven measurements, very similar to how managers are measured today.
Risk management
New categories of risk are emerging: hallucinations, AI bias, inaccurate data in reports, and organizations that are being pushed to move faster growing over-reliant on AI-generated recommendations. Velocity is likely to increase before risk management strategies and tools catch up, and insufficient management will hurt the business. Much of this comes back to decision-making frameworks and the data needed to make well-informed choices about how much of the AI firehose you allow to reach customers.
Team topologies
A lot of the existing research on how teams work and how humans communicate will take time to catch up to AI-supported teams. The right team size, the types of skills, the ratio of skillsets, and the appropriate span of control are all going to have to be reconsidered. Companies are just starting to experiment with what this might look like, so there’s a lot of opportunity to learn and experiment on your own.
Go out and get started
The trick is to begin. Build something terrible, throw it away, start over, and work on your communication skills with your new teammates. Try different approaches, read about what others are doing, and exercise that curiosity muscle again.
Reach out and let me know what you build and what you observe - I’d love to hear about it.