Orders of magnitude matter: costs rarely grow linearly. Organising people in a group requires disseminating information, interpreting it and, most importantly, distributing it. Remote working and asynchronous collaboration have become the norm, meaning that distributed teams have taken their silos of knowledge with them. This has compounded the skills gap many organisations already experience with a skills distribution problem, increasing the importance of good documentation and transparent interaction processes. In this session, we'll go over the fundamentals of requirements engineering, looking at how effort scales by orders of magnitude as scope expands and requirements accumulate. Lastly, we'll discuss how you can apply elements of platform thinking to your everyday projects.
Orders & Magnitude

Transcription
Hello, my name is Jessica and I'm a developer advocate with LaunchDarkly. We're a feature management platform that helps you control your software by decoupling deployment from release. Today we're going to talk about how orders of magnitude scale with complexity. So when something goes wrong, what's your first instinct? Do you ask for help?
You'd be forgiven for thinking that collaboration is the answer to your problems here. Between the two idioms of a problem shared being a problem halved and two heads being better than one, it's understandable to think that bringing another person into the mix improves your chances of reaching a solution faster. But this isn't always the case. If you've played computer games, you might remember that in Age of Empires 2 the best way to get a structure built quickly is to assign as many villagers to the task as possible. When it comes to digital infrastructure that exists outside of video games, or to video games themselves, we have to factor in a few key differences.

In The Mythical Man-Month, a collection of essays on the craft of software development, Fred Brooks discusses the idea that the number of people and the amount of time required to get something done aren't really interchangeable commodities. They trade off evenly only when a task can be partitioned among many workers with no communication between them. And when a task cannot be partitioned at all, whether due to its structure or its complexity, applying more effort has no impact on the schedule whatsoever. Software development falls somewhere between these two categories: it can be partitioned, but communication is required between each subtask. In these situations, the best possible outcome doesn't come close to an even trade-off between people and hours.

There are a few schools of thought on how the cognitive load of communication is distributed, but it's widely accepted that intercommunication, talking to each other, is where the majority of the work lies. If each part of the task you're working on needs to be separately coordinated, the effort required to complete that section of work grows quadratically: a team of n people has n(n-1)/2 pairs who may need to talk. Three people require three times as much interaction as two people doing the exact same piece of work.
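To make the growth concrete, here's a minimal sketch of Brooks's pairwise-channel count. The `channels` function name is mine, not from the talk; it simply counts the pairs in a team of n people.

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in [2, 3, 5, 10]:
    print(f"{n} people -> {channels(n)} channels")
# 2 people have 1 channel, 3 people have 3 (three times as many),
# and 10 people already have 45.
```

Doubling headcount roughly quadruples the coordination paths, which is why throwing more villagers at the problem doesn't transfer from Age of Empires to software.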
Things only get compounded when you throw group meetings into the mix. This projected timeline really shows Brooks's Law in action: adding people power to late software only stands to make it later. To tackle this, let's talk about the concept of collective intelligence. Researchers describe collective intelligence as a group's capability to perform well together across a wide variety of tasks. Despite the evidence for and utility of collective intelligence as an index of group-level competence, it's still a fairly new subject. But one study that sought to uncover where the magic happens with collective intelligence was based on League of Legends. It used the multiplayer online battle arena game to test whether virtual teams paired on their abilities could outperform teams that had an in-depth understanding of one another. And what has League got to do with work? Well, virtual teams are very common these days.
You know, multiplayer online battle arena games are characterized by their intensity, their need for fast decision-making, and their competitiveness, which sounds pretty familiar to a lot of business operations. I think I'd recognize a few of those traits for sure. So what did this study find?
When investigating the measure of collective intelligence, the researchers found that a group's ability to perform well at a task was largely transferable, meaning if they could do one thing well, they could probably do quite a few things well. But they also noticed that the volume of verbal communication doesn't equate to a high level of collective intelligence. In fact, people spoke less and used fewer words when communicating with each other once they did achieve that sense of collective intelligence, and therefore high performance.
One of the early observations of the study was that the more skilled players tended to spend less time communicating by preference. You know, they're good at the game and that's where they want to spend their time, so they minimize context switching and instinctively engage in nonverbal cues instead.
Tacit communication, mentioned a lot in this study, is the unexpressed recognition of the position of others. Most importantly, it leads to actions in service of a common activity. In the context of a company, this is evidence for advocating for observability measures, opting for automation and, most importantly, using metrics that everyone genuinely understands.
If we don't understand the software that we're testing, we're likely to restrict the number of times that we deploy. As anyone who has read Accelerate or follows the State of DevOps Report knows, optimizing for team performance can be achieved by tracking the four key metrics: deployment frequency, lead time for changes, change failure rate and time to restore service. If the software that we're working with doesn't neatly fit into our heads, these metrics will have very limited value. We won't be able to push past that middle tier and get into the elite performers tier that a lot of the DevOps reports talk about.
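The four key metrics from Accelerate (deployment frequency, lead time for changes, change failure rate, time to restore service) can be computed from plain deployment records. The sketch below uses hypothetical data and field names of my own choosing; it isn't any vendor's API, just an illustration of what "metrics everyone genuinely understands" can look like.

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: when the change was committed,
# when it reached production, whether it failed, and when service
# was restored if it did.
deploys = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 1, 17),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 1, 2, 9), "deployed": datetime(2024, 1, 3, 9),
     "failed": True, "restored": datetime(2024, 1, 3, 11)},
]

# Deployment frequency: deploys per day over the observed window.
window_days = (max(d["deployed"] for d in deploys)
               - min(d["deployed"] for d in deploys)).days or 1
frequency = len(deploys) / window_days

# Lead time for changes: hours from commit to running in production.
lead_time = mean((d["deployed"] - d["committed"]).total_seconds() / 3600
                 for d in deploys)

# Change failure rate: share of deploys that needed remediation.
failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Time to restore service: mean hours to recover from failed deploys.
restore = mean((d["restored"] - d["deployed"]).total_seconds() / 3600
               for d in deploys if d["failed"])

print(frequency, lead_time, failure_rate, restore)
```

Nothing here is sophisticated, which is the point: if the team can't agree on what counts as a deploy or a failure in a table this simple, the software doesn't fit in anyone's head yet.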
If we don't understand what we're building, we're likely to distrust our tests. "Why did that thing pass?" is a very understandable and common question we find ourselves asking when we're stuck in a code editor. Our confidence to deploy will plummet if we don't know what we're doing. And if we don't understand the services that we're building or working with, we won't be able to quickly restore them when they do fall down. So in essence, if you minimize the scope, you might maximize the impact of what you're working on. Thanks very much, and I'll see you for the Q&A.