Emerging Themes in Defense Tech: Investing in the Cost Curve
16 February 2023
By Arthur Karell, Partner at First In
We have seen a rush of investor interest in emerging defense technology companies going into 2023. That is no surprise. Publicly traded defense industry stocks began to outperform the broader S&P 500 index toward the end of 2021, as investors sought greater exposure to counter-cyclical industrials. War in Ukraine and commitments by Western allies to increased military spending added momentum to defense industry equities in a slowing economy. Stocks of Raytheon and Lockheed Martin were up 40-60% in 2022, while the index as a whole was down 19% on the year. Investors seeking bargains also brought that bid to a wide variety of early-stage companies selling into the defense complex.
Private investment in technologies supporting national security is always a welcome development. From an investor perspective, however, risk/reward is becoming harder to discern. It’s certainly a more crowded market, but more importantly for those early-stage investors anticipating near-future spending and skating to the puck, the puck just got slapped across the ice in 2022.
While the military lessons being drawn from Russia's invasion of Ukraine may reassure some that the U.S. is years ahead of competitors in technical capacity and training proficiency, the war has also laid bare systemic underinvestment in the large masses of attritable systems required for any direct great-power kinetic conflict lasting longer than a couple of weeks: precision-guided munitions; short-, medium-, and long-range air defenses; autonomous air, ground, and maritime vehicles; and even LEO space-based capabilities (which are vulnerable to interception).
As such, we see a renewed focus from defense customers on bending the cost curve of weapon systems, in order to procure them in sufficient numbers to be relevant in a near-peer fight. This is a central theme in First In's approach to the defense market. We invest in companies that aim to provide existing or emerging defense capabilities at a fraction of the cost – driving better pricing for customers (and taxpayers), along with margin advantages, across our defense technology portfolio. Two areas of innovation are particularly promising: industrial automation applied to procurement priorities, and products that drive down the cost of deploying computing capabilities across the enterprise.
Industrial automation reduces the expenditures necessary to achieve production of defense (or dual-use) articles, especially those that must be near-shored or onshored. It encompasses software and hardware enabling the design and manufacture of end-use weapon systems and platforms/vehicles, along with critical components like computer processors, power units, and communications systems. For end-use items and components alike, industrial automation also enables rapid, continuous testing, a sine qua non of successful defense product development.
Not all production automation is relevant to our defense investing thesis; we only consider the industrial automation of product or systems categories that are priorities for defense customers. It’s not difficult, however, to determine what technologies are procurement priorities: they are listed as mission area categories, highlighted in wargame studies and Congressional testimony, and of course, discussed in requests for research and development proposals.
The cost curve heuristic can also be applied to the proliferation of computing across the defense enterprise and its distributed networks. We’ve written about distributed computing from the perspective of great power competition, and the urgency is equally clear through a procurement lens. The U.S. Department of Defense (DoD) is growing its total compute with every new cloud IT migration, weapon system, sustainment platform, and existing infrastructure retrofit; how quickly, cheaply, and effectively that compute can be provided is a major driver of vendor profitability. It’s no longer enough to point to the broad, double-digit CAGRs of cloud adoption across the U.S. Government (USG); agencies are now more likely to require tracking of (cloud-based) software deployment costs and savings.
Several aspects of this computing cost curve opportunity are of interest, including those employing artificial intelligence (AI) to achieve order-of-magnitude improvements in delivering and securing software products for defense. Just as one would expect emerging natural language processing, machine learning, and computer vision models, tools, and libraries to improve software development margins in the commercial sphere, so too will they increase the reach of early-stage defense ventures. We expect this trend to be especially true for cybersecurity at the cloud and infrastructure layers, and it will also require deployment of AI-based defenses against adversarial AIs.
Now that public market aerospace and defense equities are likely fully priced, the comps for privately-held defense technology companies are coming back to Earth. A cost-curve approach to early-stage investing is just as applicable, if not more so, to a market that is facing top-line headwinds. Indeed, there is a small but material chance that Congressional Republican infighting in the near term, and entitlement spending and debt service in the medium and long term, may lead to the 2023 NDAA being a high-water mark for defense spending.
Whether or not the defense market maintains its 2022 momentum, investors cannot simply rely on a generalist approach of riding a rising tide, given the specialized needs of defense customers and the idiosyncratic nature of government contracting. First In is a team of former military and startup operators who are well acquainted with the interplay of policy, regulations, products, and market dynamics that constitute the inner workings of the defense industry. In an era of renewed geopolitical competition, we are eager to invest in early stage ventures that address core defense customer concerns of capability and cost.
Emerging Themes in Cybersecurity – 2023
26 January 2023
By Renny McPherson, Managing Partner at First In, and Brian Mongeau, Principal at First In
The proliferation of cyber attacks and cyber crime in recent years has made it clear that cybersecurity is, and will be for years to come, both an important investment theme and central to national and global security. The FBI reported that the cost of cybercrime to American businesses increased 393% from 2016 to 2021, reaching $6.9 billion. In the first half of 2022, cyberattacks grew 42% compared to the first half of 2021. Major cyberattacks made front page news as they caused disruptions across wide swaths of society. As the number of cyberattacks grew, bad actors continued to seek new vulnerabilities in complex digital systems and learned from both their successes and failures to hone future attacks.
Technological advances have led to more connected systems and devices — and therefore more opportunities for hackers — than ever. This trend is likely to accelerate, with one estimate projecting the number of connected devices to grow 2,400% over the current decade, reaching 500 billion by 2030. Due to the vulnerabilities inherent in these digital systems, commercial enterprises, small- and medium-sized businesses, and governments are all increasingly aware of the cyber threats emanating from both traditional and nontraditional threat vectors. As a result, our 2023 focus areas will include newer cybersecurity subsectors that should continue to grow from a small base, as well as holistic approaches to more traditional cybersecurity subsectors that require novel solutions:
- Operational Technology (OT) and Internet of Things (IoT)
OT/IoT is a nontraditional and newer subsector of cybersecurity that has received significant attention in recent years — yet gaps remain, particularly in critical infrastructure. The number of OT cyberattacks grew rapidly in recent years, increasing 2,000% in 2020 compared to the year prior. The fallout from high-profile cybersecurity attacks since then demonstrates the problems that persist and the potential for widespread negative ramifications across society from such attacks. As a case in point, the 2021 Colonial Pipeline attack by a Russia-linked cyber group was a defining moment for non-experts to appreciate enterprises' digital operating vulnerabilities. Recognizing the need for greater OT security, First In invested in Shift5 in both 2021 and 2022 to support the company's innovative military and transportation asset protection solutions. IoT, an even more greenfield space, provides a similarly situated vertical of opportunity for startups, with IoT endpoint connections projected to grow to over 25 billion in 2025 from 14 billion in 2022.
We wrote last year about how government regulation can help drive commercial adoption of crucial cybersecurity capabilities. With this in mind, it is notable that the US Government (USG) increased its focus on critical infrastructure OT in 2022. Legislatively, the Cyber Incident Reporting for Critical Infrastructure Act empowered the Cybersecurity and Infrastructure Security Agency (CISA) and placed requirements on companies to share information on security breaches. Additionally, the National Security Council is leading joint public-private collaborative cybersecurity efforts in the energy and transportation industries, as detailed by Deputy National Security Advisor Anne Neuberger. The upshot of these efforts is a government-led tailwind for more robust and innovative OT security solutions to continue to develop in 2023, filling gaps in a wide-ranging market.
- Threat-informed defense that unifies security for increasingly complex digital environments
The integration of threat intelligence, observability, and proactive defense in depth, now called threat-informed defense (TID), is a promising growth area, bringing needed holistic approaches to traditional IT environments. Twelve months ago, TID was a relatively abstract concept that sought to build on threat intelligence capabilities by providing customers with actionable insights tailored to their particular threat landscape, rather than merely providing large amounts of data that would often overwhelm security teams. As TID emerged, First In invested in Tidal in 2022 to help companies secure their operations with tailored assessments and solutions.
Going forward, TID innovation is likely to continue in several adjacent areas. Organizations’ modern, sprawling digital asset infrastructure systems are rarely properly inventoried, inhibiting effective cyber-risk analysis. Relatedly, solutions to unify cyber defenses will be crucial as inexorable — and complex — cloud migrations continue. Gartner projects cloud spending to increase 20% in 2023 as organizations continue migrating workloads to the cloud and 90% of companies operate multicloud environments. Evolving holistic solutions to these themes will be central to the early stage cyber landscape in the year ahead.
- Novel threats mean novel targets
As society and organizations have digitized at an accelerating pace, industries that have historically spent less energy, time, and money on cybersecurity are now enduring attacks. In addition to critical infrastructure, verticals ranging from transportation and logistical assets to biotech are all increasingly aware of the vulnerabilities they face, including from international adversaries. As cyber defenses against evolving threats proliferate, industry-specific solutions will also likely be needed — which will in turn unlock a significant and, to date, often untapped market opportunity.
- Threats to an expanding definition of digital systems
Historically, point solutions were the locus of the cybersecurity industry, but modern, broad digital systems give adversaries access to a vastly larger and more complex threat landscape, far beyond the confines of individual IT networks. As the quantitative growth of society's connectivity continues, the forms of digitization likewise continue to evolve. From cloud-based edge devices to social media, low-cost, democratized online access enables organizations and individuals to connect rapidly across greater geographic reach than at any time in history. As a result, however, a vast amount of micro-level data is available for hackers to steal and weaponize at scale. Individuals can have their sensitive data harvested and exploited through multiple vectors, whether via breaches of organizations of which they are customers or through the social media platforms they freely use. First In invested in the data privacy company 360 Privacy in 2022 in recognition of the need to provide greater protection for individuals' digital footprints in an ecosystem no one individual can control alone. Digital systems are now omnipresent across all elements of society. The need for protection across multiple layers of systems in our online world, for both organizations and individuals, is likely to increase over the next year as connectivity continues to grow.
The evolution of cyber attacks and cyber security over the last decade has been remarkable. Developments over the past three years in particular have confirmed the macro-level need for more robust cyber defenses. Cybersecurity companies will need to adapt and develop novel solutions in the face of new challenges in the years ahead. Recognizing this dynamic, First In believes there are many opportunities for great cybersecurity companies to be built today – companies that address novel and growing threats.
Strategic Trends in Distributed Computing
14 November 2022
By Arthur Karell, Partner at First In
In February 2020, Huawei released a position paper that measured the aggregate total computing power of various nations. Most rankings of national computing focus on government-owned supercomputers, but this report instead analyzed distributed computing capability, combining cloud, device, and edge compute, adjusted for network limitations and dissipation effects. The paper – notable for its foreword penned by Huawei's chairman, unlike the company's other English-language industry reports – has two key takeaways, both for investors and for all Americans concerned with China's commercialization of geopolitical conflict. First, the current environment is a remarkable emerging opportunity for investors who track the enabling technologies of distributed computing. Second, the paper is a telling view of how established Chinese institutions view computing competition at the national level, particularly as they leverage economic power for political and military ends.
The concept of distributed computing dates back to the very first local-area computer networks and the invention of Ethernet and ARPANET in the 1960s and '70s. The growth of the internet in the 1990s and the introduction of the smartphone (and other smart devices) in the late 2000s and early 2010s could be described as subsequent step changes in the capacity of distributed computing. As measured by the Huawei paper, the United States was home to a total of 2,522 gigaFLOPS (GFLOPS) of computing power; China trailed in early 2020 with 770 GFLOPS, but is closing the gap rapidly.
Our venture firm, First In, is investing in the technologists who are driving the world to the next frontier of distributed computing. With our investing focus on early-stage security technology, we are tracking a step change in distributed computing capability that is being driven by the convergence of three security-related trends in particular: zero-trust adoption, consensus mechanisms, and artificial intelligence. Together, these trends are remarkable opportunities for security technology investing – but also make “total computing power” a national security consideration.
Zero trust adoption. The design, implementation, and overall performance of zero-trust networks have been advancing steadily since Google's internal BeyondCorp rollout in 2014, but we see an acceleration in the adoption of zero trust architecture across the commercial enterprise. Processing power, bandwidth, and – perhaps most importantly – cultural acceptance of zero trust requirements have reached the point where zero trust is no longer a best practice reserved for only the most heavily resourced organizations. First In portfolio companies like ZeroTier and Appaegis are at the forefront of enterprise ZT adoption. In addition, methodologies such as zero-knowledge proofs (ZKP) and fully homomorphic encryption (FHE) that until recently were purely academic pursuits are finally starting to bridge into the real world, thanks to new algorithmic techniques. Enterprise-scale FHE is no longer a distant cryptographic goal; startups are being built now that will make it a reality.
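To make the promise of computing on encrypted data concrete, below is a minimal toy sketch using the Paillier cryptosystem, a long-established scheme that is only additively homomorphic rather than fully homomorphic: it adds two numbers without ever decrypting them. The tiny parameters and helper names are ours for illustration; real deployments rely on vetted libraries and far larger keys.

```python
# Toy Paillier encryption: additively homomorphic, a precursor to FHE.
# Insecure demo parameters -- production systems use 2048+ bit keys.
from math import gcd
import random

p, q = 1789, 2003                             # small primes, demo only
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1                                     # standard choice of generator
mu = pow(lam, -1, n)                          # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    l = (pow(c, lam, n2) - 1) // n            # the Paillier "L" function
    return (l * mu) % n

a, b = encrypt(123), encrypt(456)
ciphertext_sum = (a * b) % n2                 # multiply ciphertexts...
assert decrypt(ciphertext_sum) == 579         # ...to add the plaintexts
```

FHE extends this idea to arbitrary computation on encrypted data, which is what makes it so consequential for distributed, zero-trust workloads.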
Consensus mechanisms. Step outside the hype cycle for crypto and web3 and consider a fundamental security challenge for any distributed computing system: fault tolerance. The only way for a system of independent machines in a real-world environment to identify and overcome a component failure like a Byzantine fault is through some consensus mechanism. Blockchain technologies brought fault-tolerant inter-machine consensus into widespread practice and, as such, have unlocked a new category of secure enterprise-grade application companies that First In will be exploring. Indeed, one of our portfolio companies, Antithesis, is central to automating the testing of consensus mechanisms. Additionally, the decentralized nature of blockchain-based applications is immensely relevant to U.S. national security, not only for the defensive/law enforcement use cases that make the news, but also because of entirely new power projection capabilities coming online as well.
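As a minimal illustration of why consensus can tolerate Byzantine faults, the sketch below simulates the classic n ≥ 3f + 1 quorum rule: with four replicas, one of which lies, a 2f + 1 agreement threshold still yields the correct value. This is a deliberately simplified toy; production protocols (PBFT, Tendermint, and blockchain consensus generally) add signatures, leaders, and multiple voting rounds.

```python
from collections import Counter

# Four replicas tolerate f = 1 Byzantine fault (n >= 3f + 1).
def replica_vote(observed: str, byzantine: bool) -> str:
    # An honest replica reports what it observed; a Byzantine one may lie.
    return "BOGUS" if byzantine else observed

def consensus(votes: list[str], f: int) -> str | None:
    # Accept a value only when at least 2f + 1 replicas agree on it.
    value, count = Counter(votes).most_common(1)[0]
    return value if count >= 2 * f + 1 else None

state = "ledger-state-42"
votes = [replica_vote(state, byzantine=(i == 3)) for i in range(4)]
print(consensus(votes, f=1))  # -> "ledger-state-42", despite one lying replica
```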
Artificial intelligence. Steady advances in neural network architecture and performance over the last several years are compounding in a non-linear manner, with startling new generative capabilities being introduced in only the last few months. We are focused on how AI will drive distributed computing acceleration by supercharging the efficiency and effectiveness of computing on the edge and by generating machine-speed offensive and defensive security strategies, postures, and proactive decision-making across an enterprise. “Security” in this context touches the traditional enterprise software considerations of authentication and monitoring. It also will include unstructured data analysis, automation, serverless computing, physical security, military hardware, and even social and economic applications like social engineering and business intelligence/analysis. First In portfolio companies like Grist Mill Exchange and Shift5 are at the forefront of this trend.
Unsurprisingly, Huawei’s position paper concludes that investment in distributed computing power is directly tied to increases in national geopolitical power. It is clear from the Chinese scientific, military, and political sources cited in the paper that deep thinking has gone into a strategy to advance China to more developed stages of distributed computing at every level of society. For the United States to maintain and grow its lead in aggregate national compute, policymakers as well as investors should also recognize and embrace enabling technologies for distributed computing as strategic investment opportunities.
Zero Trust: The Regulation Driving Market Growth and Innovation
28 March 2022
By Brian Mongeau, Principal at First In
Zero trust is now a central cybersecurity concept at the intersection of Western governments' national security policies and private sector digital security practices. Though the idea had been slowly developing over the past half decade, it burst into public prominence and executive-level corporate discussions in May 2021 with the release of Executive Order 14028. The Biden Administration issued EO 14028 to improve the country's cybersecurity posture across commercial and government entities alike, particularly in the network security domain, by outlining a unified federal cross-sector strategy for the first time. Crucially, EO 14028 created massive market opportunities for innovative cybersecurity startups by mandating the adoption of zero trust standards for all federal agencies, federal contractors, and contractors' partners.
The speed at which zero trust transitioned from abstract concept to codification in government regulation was startling. Government regulations rarely move quickly, especially in the technology sphere, yet EO 14028 was released only nine months after the publication of the U.S. Department of Commerce National Institute of Standards and Technology's (NIST's) SP 800-207. This relatively rapid process demonstrates an unusual degree of alignment between regulatory and academic stakeholders, particularly as cybersecurity policy has long been critiqued as a critical national security field without a unifying federal strategy. While the order was certainly not equal to legislation, it nonetheless represented a critical step in codifying government cybersecurity policy and in guiding the private sector in a vital national security domain.
EO 14028 outlined multiple steps and lines of effort to update, align, and operationalize national cybersecurity; four key points are of particular relevance to cybersecurity startup founders, the venture capital industry, and the general business community:
- A call for direct partnerships between federal agencies and the private sector, including both cybersecurity companies providing products and end-user organizations protecting data.
- A mandate to adopt zero trust architecture (ZTA) across all federal agencies.
- A timeline to update and implement software supply chain security measures across the federal government and among its contractors.
- The establishment of boards and advisory committees to ensure the implementation of cybersecurity efforts and improvement iterations on best practices and standards.
Combined, these pillars establish priorities and methods for public-private collaboration, as well as guidance for emerging cybersecurity startups and venture capital firms on how the U.S. federal government will direct spending. Within this context, two elements are crucially important. The first is that all four focus areas are rooted in, and often explicitly require, ZTA. The second is that ZTA will not only be mandated for federal agencies, but will also be required of federal contractors and partners (i.e., software supply chain security). This requirement creates an extensive downstream ripple effect: not only companies that work directly with the U.S. federal government – which accounted for over $554 billion in contracts in FY2020 – will need to adopt ZTA, but also those that work with direct contractors. The FY2022 federal IT budget alone is $82.1 billion, requiring significant additional spend to secure to ZTA standards and hinting at the much larger contractor and B2G ecosystem market opportunity. The upshot is that zero trust presents ample room for growth as the concept evolves and is refined.
Implementing zero trust in network architecture requires an understanding of its concepts more than a focus on any prescriptive solution. ZTA will continue to evolve as cybersecurity companies adapt products and services and end-user organizations standardize cyber practices in response to EO 14028. The basis of ZTA, in line with NIST SP 800-207, is that continuous user authentication, authorization, and validation are necessary to grant and maintain access to protected resources, rather than accepting the assumption that static, perimeter-focused security networks can be trusted as secure. In adopting ZTA, organizations abandon the dangerous belief that security perimeters can be trusted and instead assume that their systems have already been infiltrated. The leading industry analysis firm Forrester stresses that zero trust is not in itself a solitary product or platform; it is instead a framework to guide entities' cybersecurity postures and strategies around the concepts of "never trust, always verify" and "assume breach." Utilizing a zero trust framework, security teams must adopt security models and enabling products based on workloads, data, and identity awareness.
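As a minimal sketch of what "never trust, always verify" means at the level of a single request, the toy handler below re-authenticates the caller, checks device posture, and evaluates authorization policy on every call, never consulting network location. The function and attribute names are our own illustration, not terminology from NIST SP 800-207 or any vendor product.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_token: str       # short-lived credential, re-verified on every call
    device_healthy: bool  # posture signal, e.g. patched OS, disk encryption
    resource: str
    action: str

POLICY = {("analyst", "payroll-db"): {"read"}}  # (role, resource) -> allowed actions

def verify_token(token: str) -> str | None:
    # Placeholder: a real system validates a signed, expiring token (JWT/OIDC).
    return {"tok-abc": "analyst"}.get(token)

def authorize(req: Request) -> bool:
    role = verify_token(req.user_token)          # 1. authenticate, every time
    if role is None or not req.device_healthy:   # 2. verify device posture
        return False
    # 3. authorize this action on this resource; network location never enters into it
    return req.action in POLICY.get((role, req.resource), set())

print(authorize(Request("tok-abc", True, "payroll-db", "read")))  # True
print(authorize(Request("tok-abc", True, "payroll-db", "drop")))  # False
```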
The U.S. government's focus on ZTA, and its codification in EO 14028, generates momentum for innovative startups that are able to shape the industry's future while simultaneously aligning entrepreneurial efforts behind guiding principles. The ZTA framework offers significant commercial potential for new technologies and products as many cybersecurity incumbents compete to acquire zero trust-associated startups. As the concept evolves, opportunities will develop for entrepreneurs to create new solutions that focus on each element of zero trust, as well as product suites that better enable collaboration across and within commercial enterprises and government agencies alike.
Sources:
- Biden, Joseph R. Executive Order on Improving the Nation’s Cybersecurity. 12 May 2021.
- O’Connor, Nuala. “Reforming the U.S. Approach to Data Protection and Privacy,” Digital and Cyberspace Policy Program, Council on Foreign Relations. 30 January 2018.
- Office of the Chief Data Officer at the Bureau of the Fiscal Service, U.S. Department of the Treasury. “Federal Contract Explorer,” Data Lab, USAspending.gov.
- General Services Administration. IT Portfolio Dashboard. Accessed 24 March 2022.
- Rose, Mitchell, and Connelly. SP 800-207: Zero Trust Architecture. U.S. Department of Commerce National Institute of Standards and Technology. August 2020.
- Turner, Steve. “Zero Trust Is Not A Security Solution; It’s A Strategy,” Forrester. 18 February 2021.
- Cunningham, Chase. “A Look Back At Zero Trust; Never Trust, Always Verify,” Forrester. 24 August 2020.
Emerging Themes in Cybersecurity – 2022
15 February 2022
In 2022, we believe there will be significant innovation and opportunity in now-core cybersecurity sectors that were "new" just a few years ago. These include zero trust, counter-phishing, and cloud security. At the same time, we highlight newer cybersecurity sectors, including operational technology (OT) cybersecurity, web3 security, and threat-informed defense. Other sectors, such as cybersecurity for small- and medium-sized businesses, remain relevant and a focus for First In and will be covered in future articles.
The Evolution of Zero Trust
The concept of zero trust – that organizations should abandon the dangerous belief that security perimeters can be trusted and instead assume that their systems have already been infiltrated – was an important paradigm shift in cybersecurity. We believe zero trust will continue to evolve in new ways. A new slate of early-stage companies has emerged with solutions that build on the first generation of zero trust products. In a zero trust architecture, continuous user authentication, authorization, and validation are necessary to grant and maintain access to protected resources, rather than organizations trusting that their networks are secure. As such, security teams must adopt new security models and enabling solutions based on workloads, data, and identity awareness. This evolution will create opportunities for new solutions that focus on each element of zero trust, as well as product suites that better enable collaboration across enterprises.
Zero trust is a key driver of opportunities in the network security market, which Gartner estimates will grow at a 14% CAGR over the next three years to $57 billion in 2024.
Advances in Identity Protection
Identity and access management (IAM) ties directly into the zero trust framework by serving to authenticate, authorize, and validate users' identities within networks. The segment has already begun to experience rapid growth and technological progress as workforces shift to remote structures and individuals become increasingly reliant on personal devices. Legacy IAM focused on passwords, but the segment has evolved to meet the challenges of networks' growing complexity and structure, including biometric authentication and cloud architecture.
The IAM market is large at $23.7 billion and is expected to grow with considerable momentum at an 11.5% CAGR to $32.8 billion by 2024. Going forward, passwordless authentication based on biometrics has the opportunity to be a prime driver of growth. Another key driver will be the subsegment of cloud IAM, which is expected to grow to $16.2 billion by 2027 at a 26.7% CAGR.
Counter-Phishing Beyond the Inbox
Despite the increasing complexity and scope of cybersecurity needs, the fundamentals remain as important as ever. The basics of security, however, are often overlooked. Counter-phishing defenses are a prime example of an overlooked security technique in need of new approaches, especially as attack surfaces become more complex. Deloitte estimates that 91% of all cyberattacks begin with a phishing email and that 32% of all successful breaches involve phishing techniques. Moreover, over 50% of phishing breaches occur via social media, outside the traditional corporate inbox, while almost 70% of employees fail basic cybersecurity quizzes. Despite these conditions, legacy counter-phishing tools remain reactive, scanning for malicious URLs seen in attacks that have already occurred.
The spear-phishing market alone – a subset of phishing that targets specific accounts rather than broad audiences – will grow to $1.9 billion at a 9.5% CAGR between 2020 and 2027. We believe startups in the broader counter-phishing segment have a real opportunity to bring innovation to this sector.
The Next Generation of Cloud Security
Next generation cloud security is primed for continued innovation as cyber threat landscapes evolve and new attack surfaces are exploited, especially at the infrastructure-as-a-service (IaaS) level. Relatively new entrants have grown strongly since the start of the Covid-19 pandemic, when workforces shifted to remote structures en masse for the first time. Though Covid-19 accelerated the shift to cloud network infrastructure, the underlying trend of enterprises migrating to the cloud predated the pandemic and will continue. One outcome of the rapid transition has been a less organized and more complicated ecosystem for enterprises to manage. Whereas organizations had deliberately planned cloud migrations in the years leading up to the outbreak of Covid-19, the rushed transition during 2020 and 2021 produced a mixture of multi-cloud and on-premise/cloud hybrid systems. It is now estimated that 92% of enterprises use a multi-cloud strategy, that 78% use hybrid cloud infrastructures, and that organizations run, on average, 2.6 public and 2.7 private cloud systems.
The cloud security market currently stands at $34.8 billion and is expected to grow at a 14.2% CAGR. Significant room still exists for innovation and for startups focused on the next generation of cloud security, especially as cloud IaaS offerings evolve into new forms in need of new security solutions.
Operational Technology Cybersecurity
Operational technology (OT) cybersecurity will become a pressing issue as geopolitical and cybercrime events alike produce threats to critical infrastructure. Nation-state threats are evidenced by ongoing government warnings of likely cyberattacks and their spillover effects. Recent examples include the breach of the New York City MTA this past summer by hackers with suspected links to the Chinese government. The Colonial Pipeline ransomware attack of spring 2021 likewise demonstrated the high financial costs of cybercrime for companies, as well as its harmful effects on society as a whole.
The market is still early in its development, with venture funding preceding growth in overall market size. Venture funding in OT/IoT increased 266% year-over-year (YoY) in 2021, with seven of the year's 11 largest cybersecurity investments landing in the segment.
Threat-Informed Defense
Similar to zero trust, threat-informed defense (TID) is still developing and being defined for the cybersecurity market. TID was conceptualized by MITRE and "applies a deep understanding of adversary tradecraft and technology to protect against, detect, and mitigate cyber-attacks. It's a community-based approach to a worldwide challenge." TID goes beyond the crowded threat intelligence market to test organizations' security capabilities against known threats and expected attack strategies at a tactical level. Whereas threat intelligence seeks to provide customers knowledge of and visibility into threats, TID aims to operationalize cybersecurity capabilities based on adversaries' known attack vectors, through constant, iterative security team simulations and on the assumption that no cybersecurity solution is breach-proof. Given its proactive defensive posture, TID is likely to be a fast-growing next generation and evolution of threat intelligence solutions.
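One way to picture TID's tactical-level testing: compare the ATT&CK techniques a relevant adversary is known to use against those an organization's defenses actually detect, then drive simulations and detection engineering at the gap. The technique IDs below are real MITRE ATT&CK identifiers, but the sets and function are our own toy illustration, not a MITRE or vendor product.

```python
# Toy threat-informed gap analysis over MITRE ATT&CK technique IDs.
adversary_techniques = {
    "T1566",  # Phishing
    "T1078",  # Valid Accounts
    "T1486",  # Data Encrypted for Impact (ransomware)
}
current_detections = {"T1566", "T1110"}  # what existing tooling covers today

def coverage_gap(threat: set[str], detections: set[str]) -> set[str]:
    # Techniques the adversary uses that no current defense detects.
    return threat - detections

print(sorted(coverage_gap(adversary_techniques, current_detections)))
# -> ['T1078', 'T1486']: prioritize simulations and detections here first.
```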
Security for Web3
Web3 is based on the architecture of decentralized applications ("dApps") managed by blockchain, network nodes, and smart contracts. While many advantages flow from the dApp architecture of web3, security trade-offs have emerged, such as exposure to open source software supply chain attacks in the absence of an organizational decision-maker. The Log4j vulnerability clearly highlighted how such weaknesses can be exploited. The high upside potential of web3, and an ambiguous future that is still being molded, led web3 security startups to attract a more than 10x YoY increase in venture investment in 2021, totaling more than $1 billion. The overall market is still very early, and we assess it will grow rapidly over the next few years.
Sources:
- Rose, Mitchell, and Connelly. SP 800-207: Zero Trust Architecture. U.S. Department of Commerce National Institute of Standards and Technology. August 2020.
- Turner, Steve. “Zero Trust Is Not A Security Solution; It’s A Strategy,” Forrester. 18 February 2021.
- Cunningham, Chase. “A Look Back At Zero Trust; Never Trust, Always Verify,” Forrester. 24 August 2020.
- Appgate, Inc. S-1 Filing with the U.S. Security and Exchange Commission. 28 January 2022.
- 2021 Annual Information Security Update. Pitchbook. 31 January 2022.
- “The Worldwide Cloud Identity and Access Management Industry is Expected to Reach $16.2 Billion by 2027 – ResearchAndMarkets.com,” BusinessWire. 09 December 2021.
- “91% of all cyber attacks begin with a phishing email to an unexpected victim,” Deloitte press release. 09 January 2020.
- Global Spear Phishing Markets Report 2021-2027, BusinessWire. 21 July 2021.
- Flexera 2021 State of the Cloud Report. 2021.
- “Global Cloud Security Market (2021 to 2026),” BusinessWire. 05 August 2021.
- “Focal Points: Threat-Informed Defense.” MITRE.
- Metinko, Chris. “Venture Investment In Cryptosecurity Jumps 10x Over Last Year As Sector Hits Sweet Spot With Venture Capitalists,” Crunchbase News. 17 August 2021.
What Kind of Entrepreneurs Can Prevent the Next Colonial Hack
By Reed Simmons, MBA Associate at First In, and Renny McPherson, Managing Partner at First In
Two weeks ago, at gas stations along the US east coast, the tangible effects of cyber attacks came into sharp relief. Over the last year, there have been three major public cyber attacks against the United States and its interests — the massive SolarWinds software supply chain intrusion, the broad compromise of Microsoft Exchange servers, and most recently the attack on Colonial Pipeline, enabled and managed by the criminal group DarkSide. While the first two attacks are attributed to nation-states, the Colonial Pipeline attack was orchestrated by a criminal group. Together, the attacks highlight two key developments in the cybersecurity ecosystem: the expanding attack surface and the democratization of advanced threats. We believe these trends will continue to accelerate, placing a premium on cybersecurity companies and professionals with first-hand experience in all elements of cybersecurity. Military and intelligence community veterans have such experience, and they are uniquely positioned to build the cybersecurity solutions of the future.
As an organization’s software ecosystem grows in complexity, the number of potential cyber vulnerabilities and attack vectors—the totality of which is called the attack surface—expands exponentially. The cyberattacks of the last year exposed these growing vulnerabilities through complex methods of compromise. Hackers exploited previously unknown "zero-day" vulnerabilities in Microsoft Exchange servers to gain access to thousands of organizations' networks. The SolarWinds supply chain compromise utilized a different intrusion set, exploiting a trusted third-party software vendor to breach networks via push updates. As we write, DarkSide's attack vector into Colonial Pipeline remains unknown, but the takeaway is clear: threat actors are aware of the growing attack surface and exploit its many vectors with growing sophistication. Moreover, anyone is a potential victim, as China's indiscriminate deployment of "web shell" backdoors to tens of thousands of servers demonstrates.
In the military and intelligence community, threat is defined as the product of capability and intent. We see a clear trend: as advanced cyber attack capabilities proliferate outward—spurred, in part, by open-source collaboration—barriers to entry for criminal and state actors fall. In turn, the cost-benefit analysis for more and more criminal groups and state and pseudo-state actors is clear: they can produce asymmetric returns on operational time invested. DarkSide's attack is an example of the democratization of advanced threat: a group of non-state cybercriminals, half a world away, had the capability to shut down US critical infrastructure. While shutdown may not have been DarkSide's ultimate purpose, that is no reason for comfort – there is hardly a dearth of malintent. We expect such attacks to multiply.
At First In, we believe military and intelligence community veterans' experience at the leading edge of understanding attack vectors and devising cybersecurity solutions imparts the perspective, technical skills, and community needed to understand and counteract state and criminal cyber actors. Cyber startups will need to counter advanced threats on behalf of the private sector, critical infrastructure, and the government. Veteran entrepreneurs are well positioned to create some of the most promising cyber startups of the next several years. This is particularly true as the line separating attacks on the public and private sectors has blurred significantly. Already, the federal government is acting on a new model of cyber resiliency, built around zero trust, to modernize the nation's cyber defenses in partnership with the private sector. As companies like Mandiant/FireEye and Tenable show, military veterans have a track record of success in the cybersecurity market. Yet as a demographic they remain broadly underserved from a capital perspective. More venture firms need to appreciate the diverse perspective that veterans can bring, and bridge this gap to unleash their potential.
Sources:
- “How Should the U.S. Respond to the SolarWinds and Microsoft Exchange Hacks?” Accessed May 20, 2021. https://www.lawfareblog.com/how-should-us-respond-solarwinds-and-microsoft-exchange-hacks
- “Executive Order on Improving the Nation’s Cybersecurity.” Accessed May 20, 2021. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/
- “Veteran Entrepreneurship: Access to Capital Challenges and Opportunities.” Accessed May 20, 2021. https://ivmf.syracuse.edu/wp-content/uploads/2019/11/IVMF_Access-to-Capital-Challenge_Nov-2019_kksrvm.pdf
Emerging Themes in Cybersecurity – 2021
By Renny McPherson and Dr. Josh Lospinoso
The cybersecurity landscape is evolving rapidly as the attack surface for cyberattacks grows exponentially due to mega-trends in how people live today: more devices, more digital everything, more open source, more enterprises developing software, and everything digital being connected.
Covid-era work-from-home mandates have exacerbated the risk. As such, there are many opportunities for startups to have an impact by addressing a new, modern theme or by taking a new approach to a long-standing cyber segment such as endpoint protection. Below, we outline eight themes of interest for First In this year. This list is far from exhaustive, as there are many segments within cybersecurity that present opportunity.
The Long Tail
Small and medium-sized businesses (SMBs) are increasingly at risk of cyber attack, and enterprises are more and more vulnerable to supply chain risk from their vendors and partners. Solutions that have worked in the enterprise are now needed by SMBs, at lower price points and with greater ease of use.
We will devote a follow-on post to the long tail of risk.
Data Security
Enterprises generate and retain massive amounts of data. It's important to secure this data with a combination of filtering, blocking, and remediating techniques. Data security platforms will integrate directly with other data platforms to monitor, provide backups, and ensure compliance. There are many incumbents in this space, but we believe this is a growing segment.
We are keeping an eye on data encryption startups that are answering the call for quantum-resilient encryption techniques. While the technological problem clearly exists, companies are still working to find viable business models for their technological solutions.
We believe that data vaults are an investment opportunity in this space. If service providers host highly secure data and expose it as a service to customers, they can neatly solve several pain points at once. These so-called “data vaults” transfer risk to the service provider.
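As a concrete sketch of the pattern, the toy vault below accepts a sensitive value, keeps it internal, and hands back an opaque token; downstream systems only ever store and pass tokens, so breaching them exposes nothing sensitive. The class and method names are our own illustration, not any vendor's API.

```python
import secrets

class DataVault:
    """Toy tokenization vault: callers hold tokens, never raw values."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}   # token -> sensitive value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # unguessable reference
        self._store[token] = value             # raw value never leaves the vault
        return token

    def detokenize(self, token: str, caller_authorized: bool) -> str:
        # Real vaults enforce per-caller policy, audit logging, and rate limits.
        if not caller_authorized:
            raise PermissionError("caller may not detokenize")
        return self._store[token]

vault = DataVault()
card_token = vault.tokenize("4111-1111-1111-1111")
print(card_token)                          # safe to store in app databases
print(vault.detokenize(card_token, True))  # only the vault reverses the mapping
```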
Major players in this space include Very Good Security, Evervault, and Skyflow.
Application & Composition Analysis
COVID-19 exacerbated the pressure on technology organizations to integrate security into multiple phases of the software development lifecycle. Over the next several years, teams increasingly will integrate security into their build phases. Startups in this space will offer tools to detect vulnerabilities in software dependencies and perform software composition analysis.
Major players in this space include Sonatype, Snyk, Whitesource, and Micro Focus. Phylum is an upstart taking a next generation approach. Rather than match known vulnerabilities against open source package versions, Phylum ingests terabytes of open source code and performs analysis to find unknown vulnerabilities, identify dependency risk, and mine for malicious activity. Earlier this year, First In led Phylum’s seed stage financing.
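For readers unfamiliar with the mechanics, traditional software composition analysis reduces to matching a project's pinned dependency versions against a database of known-vulnerable ranges; this is the baseline that next generation approaches like Phylum's go beyond. A minimal sketch of that baseline, with a deliberately simplified advisory table:

```python
# Minimal software composition analysis: match pinned dependencies
# against known-vulnerable version ranges. Advisory table is simplified.
KNOWN_VULNS = {
    # package: (minimum safe version, advisory id)
    "log4j-core": ("2.17.1", "CVE-2021-44228"),
    "requests":   ("2.20.0", "CVE-2018-18074"),
}

def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(x) for x in v.split("."))

def audit(dependencies: dict[str, str]) -> list[str]:
    findings = []
    for pkg, version in dependencies.items():
        if pkg in KNOWN_VULNS:
            fixed_in, advisory = KNOWN_VULNS[pkg]
            if parse_version(version) < parse_version(fixed_in):
                findings.append(f"{pkg} {version}: {advisory} (fixed in {fixed_in})")
    return findings

print(audit({"log4j-core": "2.14.0", "requests": "2.28.1"}))
# -> ['log4j-core 2.14.0: CVE-2021-44228 (fixed in 2.17.1)']
```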
Application Security Orchestration and Correlation
While application security is a burgeoning industry, we believe there will be major growth in the number of tools available to enterprises. These tools will require integration and correlation. As this market will likely be fragmented, startups will rise to integrate complementary solutions and improve end-user experiences. This market is poised to break out.
Emerging companies in this space include Code Dx and ZeroNorth.
Cyber Insurance
Cyber insurance is still in its early days, with major insurance providers finding their footing in this essential market. In the race to maturity here, look for more news such as the recently announced partnership between Google, Allianz, and Munich Re. With breaches rising every year and cybersecurity spending rising yearly too, a risk transference mechanism is necessary. Large insurance companies cannot simply copy-paste life insurance actuarial tables onto the cyber risk paradigm. As a result, this is a ripe market for small, nimble companies with strong risk assessment chops to stand out.
We believe that a core problem in cyber insurance is information. Insurers simply have a difficult time quantifying risk. The insured, especially small and medium sized businesses, want to mitigate what they can and transfer the rest without thinking about it too much. We believe there’s a large market opportunity for companies to address both issues at once. By pairing cybersecurity assessments with insurance, the same entity can perform a service to the SMB (cybersecurity risk) and more accurately understand what they’re insuring. Finally, it becomes possible to price cybersecurity mitigations based on how they impact insurance premiums.
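A stylized sketch of the pricing logic this pairing enables: if an assessment estimates breach probability and severity, a premium can be quoted as expected loss plus loading, and each mitigation can be priced by how much it moves that number. All probabilities, severities, and discount factors below are invented for illustration.

```python
# Stylized cyber premium: expected annual loss plus loading, with
# mitigation credits. All figures are illustrative, not actuarial.
BASE_BREACH_PROB = 0.12        # annual breach probability from an assessment
EXPECTED_SEVERITY = 250_000.0  # expected dollars lost per incident
LOADING = 1.4                  # insurer expenses plus margin

MITIGATION_FACTORS = {         # multiplicative reductions to breach probability
    "mfa_everywhere":    0.70,
    "offline_backups":   0.85,
    "employee_training": 0.90,
}

def annual_premium(mitigations: set[str]) -> float:
    prob = BASE_BREACH_PROB
    for m in mitigations:
        prob *= MITIGATION_FACTORS.get(m, 1.0)
    return prob * EXPECTED_SEVERITY * LOADING

base = annual_premium(set())
with_mfa = annual_premium({"mfa_everywhere"})
print(f"${base:,.0f} -> ${with_mfa:,.0f}")
# The premium delta is, in effect, the price signal for adopting MFA.
```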
Emerging companies include Coalition, Cowbell, and Trava.
Unifying Security in the Cloud: CSPM, CWPP and GRC
As containerization permeates everything, cloud workload protection platforms will become essential additions to cloud access security broker offerings. This is a hot space with recent acquisitions by Palo Alto, McAfee, Cisco, Check Point, and Fastly. As Kara Nortman of Upfront Ventures hypothesizes, the "Rise of the Multi-Cloud" will be a core driver of cybersecurity tool demand. While 93% of enterprises intend to use a multi-cloud strategy, cybersecurity products aren't built for a cloud-first world.
Caveonix created a single integrated platform for automated compliance, cloud security posture management (CSPM), cloud workload protection (CWPP), and governance, risk, and compliance (GRC) in hybrid and multi-cloud environments. First In led Caveonix's $7M Series A in Q4 2020.
Identity and Access Management
Identity and access management (IAM) governs permissions across an enterprise. It helps organizations manage employee and customer identities and ensures that privacy preferences and access provisioning safeguard sensitive services and data. This is a large and growing market.
We believe that there’s a major opportunity for players to develop better rules management for IT and security teams. Currently this is an error-prone and labor intensive process.
There continues to be a major opportunity to evolve beyond passwords and multifactor authentication. Based on behavioral analytics and the device used for access, possible replacements such as Zero-Factor Authentication are emerging.
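A toy sketch of the idea: combine passive signals into a risk score and demand an explicit credential only when the score is high. The signal names, weights, and thresholds below are invented for illustration, not drawn from any product.

```python
# Toy risk-based ("zero-factor") authentication: passive signals decide
# whether a session proceeds silently, triggers MFA, or is blocked.
SIGNALS = {
    "known_device":      -0.4,  # device fingerprint seen before
    "typical_geo":       -0.3,  # login from a usual location
    "typing_cadence_ok": -0.2,  # behavioral biometrics match
    "impossible_travel": +0.9,  # login far from the last one, minutes apart
}

def risk_score(observed: set[str]) -> float:
    return 0.5 + sum(SIGNALS[s] for s in observed)  # 0.5 = neutral prior

def decide(observed: set[str]) -> str:
    score = risk_score(observed)
    if score < 0.2:
        return "allow silently"
    return "step-up: require MFA" if score < 0.8 else "block"

print(decide({"known_device", "typical_geo", "typing_cadence_ok"}))  # allow silently
print(decide({"impossible_travel"}))                                 # block
```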
Major players in this space are Beyond Identity, Forter, Mati, JumpCloud, and Alloy.
New Approaches to Endpoint Security
Endpoints are remote devices that provide services and process data. These devices, like computers, phones, network gear, and servers, remain critical. This is a very established and large segment of the information security field, and we view it as very difficult for new players to penetrate given how crowded it is.
However, some subsegments offer opportunity. Internet of Things and Operational Technology, for example, represent a new frontier of cybersecurity that we believe offers a huge opportunity.
We believe there’s opportunity in the Extended Detection and Response (XDR) space. This represents a potential next generation of endpoint security, where detection and response are automated. Startups with a superior product could challenge increasingly outdated antivirus solutions, and labor-intensive security information and event management software incumbents.
The Rise, Ubiquity and Vulnerability of Open Source Software
By Renny McPherson, Managing Partner at First In, and Matthew Dulaney, Director of Operations (Summer Associate) at First In.
Open source software is ubiquitous today. That’s a good thing. But it wasn’t always clear open source would win. Reviewing the history of proprietary and open source software development can help us understand how open source became so widely used and how open source software came to be both incredibly valuable to the world, and incredibly vulnerable as a threat vector for cyber attacks.
Let's start with a brief history. In the early 1970s, universities and research organizations were the only institutions with the resources and demand to purchase computers with sufficient functionality to be usable. MITS (Micro Instrumentation and Telemetry Systems) changed that with the Altair 8800 microcomputer, which began to bring computing to the mainstream. Bill Gates and Paul Allen designed a BASIC interpreter for the device, fueling both Altair's sales and the success of their own company, Microsoft. By 1976, BASIC was used by most Altair owners; however, few owners actually paid for the software. Instead, many people loaded friends' purchased copies onto their machines.
Bill Gates wrote an "Open Letter to Hobbyists" in 1976, accusing users of stealing Microsoft's software. The letter was an assault on the software development community, which embodied what would now be considered open source values of decentralization and open information sharing stemming from the early days of computing. "Hobbyists" would riff off of Gates's code and share their own versions for free — other developers would take those modified editions and further adjust the code, spreading a network of free software based on the original code written by Microsoft. Gates condemned code sharing, instead advocating professional, paid software development.
Proprietary software — which can be called “closed-source” software, as opposed to open source software — dominated the 1980s and much of the 1990s. Software was sold attached to hardware products, and users could not access or modify source code behind their products. Microsoft, after Gates’s “Open Letter to Hobbyists,” continued to criticize the principles behind open source.
Meanwhile, an MIT computer programmer, Richard Stallman, was inspired to establish a free operating system, GNU, in 1983. Stallman had programmed printer software to notify users and to pause printing when a printer was jammed. When a new printer arrived with closed-source software that inhibited his ability to program the printer in the same way, he resolved to build a new, free operating system. He took it further: Stallman quit MIT to continue developing GNU, and his strict adherence to free software is codified in the GNU General Public License, or GPL. The GPL guarantees users the freedom to run, study, modify, and redistribute covered software, and it requires that software built from GPL-licensed code be distributed under those same terms, preventing developers from reimposing exclusive rights or additional restrictions downstream.
In 1991, Linus Torvalds created Linux with GNU's tools. Linux — a portmanteau of Unix and his first name — is, strictly speaking, an operating system kernel, providing developers extensive flexibility to write programs that meet their needs. Licensed under GNU's GPL, Linux is steeped in open source orthodoxy: users can freely use and alter the OS's code, but if they distribute modified versions they must publish their changes for others to access.
While Linux has had a massive impact in the history of open source software development, its early success was limited. An initial point of contention was rooted in doubt that a mass of amateur, part-time coders could effectively and consistently create usable software. Another roadblock was Linux’s complexity compared to meticulously developed alternatives.
Linux's popularity exploded among those willing to spend time untangling its complexity in return for its power. Soon enough, companies like Red Hat began to develop Linux-based toolkits that allowed developers to harness Linux's vast functionality. Red Hat built a careful business model to take advantage of open source without monetizing open source software per se. It would assemble open source code, polish it, and release two versions: a free version with just the source code, and a paid version that included the source code along with how-to guides, serial numbers, and customer support. Red Hat appeased hardcore open source developers, offered prices that were a fraction of the competition's, and made money along the way. Its popularity surged.
Linux's burgeoning success converted many previously hardline anti-open source software developers. One such convert was Eric Raymond, who published his experience with open source in The Cathedral and the Bazaar. Raymond initially believed proper software development necessitated careful design by small teams, with no early beta releases. After his conversion, he wrote that Linus Torvalds's genius was to "release early [and] release often" (p. 29) and to treat users as co-developers (p. 27). He also debunked the claim that open source software is inherently inferior to proprietary alternatives: "Quality [is] maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers. To the amazement of almost everyone, this work[s] quite well."
Raymond's essay caught the attention of executives at Netscape, maker of the popular Navigator web browser. Soon after the essay was published, the company decided to release the source code for Navigator 5.0, kicking off the Mozilla project and lending further legitimacy to open source. From the Mozilla project's ashes, developers at AOL (which had acquired Netscape) created a sleeker version of the browser, Mozilla Firefox, in 2004. Where Navigator 5.0 was a jumbled mess of features and code, Firefox was sleek and user-friendly; it challenged Internet Explorer's dominance, logging 100 million downloads within a year and a half and 1 billion by 2009.
As Firefox grew in popularity, Linus Torvalds himself was advancing another key pillar of open source software development: Git. Git allowed developers to track revisions and easily implement source code changes, bringing transparency and elegance to previously clunky version control schemes. Git's tools were consolidated in 2008 with the launch of GitHub, a free repository of open source code. The current GitHub workflow begins with branching, where developers essentially create a new environment in which they can tweak code without directly impacting the master branch. In the new branch, developers "commit" new code to the existing project, adding and testing new features separate from the core project. Commits also track development, allowing project owners to understand from whom changes came and to reverse progress if bugs are discovered. Developers solicit community feedback with Pull Requests, then, once satisfied, deploy a branch by merging it into the master project.
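For the unfamiliar, the workflow described above maps onto a handful of commands; the sketch below drives them from Python purely for illustration. The branch and commit names are invented, the script assumes git is installed and runs inside an existing repository, and the Pull Request itself is opened on GitHub.

```python
# The GitHub flow described above, driven from Python for illustration.
import subprocess

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

git("checkout", "-b", "feature/better-logging")        # 1. branch off the master branch
# ... edit files in the working tree ...
git("add", "-A")                                       # 2. stage the changes
git("commit", "-m", "Add structured logging")          # 3. commit: a tracked, reversible unit
git("push", "-u", "origin", "feature/better-logging")  # 4. publish the branch
# 5. open a Pull Request on GitHub to solicit feedback; merging it
#    deploys the branch into the master project.
```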
GitHub facilitates open source development by tracking developer histories and allowing developers to establish reputations for their contributions in GitHub. The branching and merging process addresses version control, while profile tracking makes developer histories transparent and allows software owners to better evaluate incoming changes to their code.
Open source completed its rise to ubiquity and officially joined the mainstream when Microsoft purchased GitHub in 2018 for $7.5 billion in Microsoft stock. The acquisition marks a stark turnaround in sentiment from Bill Gates's Open Letter, and from the early 2000s, when then-CEO Steve Ballmer called Linux "a cancer". If Netscape's embrace of open source in 1998 offered credibility and allowed corporations to follow suit and consider similar adoption, Microsoft's acquisition solidified open source as the dominant software development ethos. GitHub plays host to major corporations' source code, including that of Facebook, Amazon, and Google, and continues to be the default for software developers worldwide. Per CNBC, "The success of open source reveals that collaboration and knowledge sharing are more than just feel-good buzzwords, they're an effective business strategy."
Software developers of all kinds — from tech giants to amateurs — continue to rely on open source code, harnessing others' ingenuity to create high quality programs. However, open source remains imperfect, as demonstrated by the higher volumes of software bugs and vulnerabilities to cyber attacks reported in recent years. Notably, Google in 2014 disclosed the now-infamous Heartbleed bug in OpenSSL: over 500,000 websites used OpenSSL's Heartbeat (the feature afflicted by Heartbleed) and were thus vulnerable to attack. Companies at risk ranged from Twitter to GitHub to the Commonwealth Bank of Australia. The incident highlighted a crucial vulnerability of open source: essential programs, like OpenSSL, are imperative to the success of major companies and projects, but lack security oversight.
Experts agree. “Windows has a dev team. OpenSSL these days is two guys and a mangy dog,” says Matthew Green, assistant professor at Johns Hopkins. Writes Chris Duckett of ZDNet: “In years past, it was often the case that businesses took the view that all that was needed was to drop source code on a server, and the community will magically descend to contribute and clean up the code base. Similarly, users of open source software wrongly assume that because the code is open source, that an extensive review and testing of the package has occurred.” And “The mystery is not that a few overworked volunteers missed the bug,” says Steve Marquess, former president of the OpenSSL Foundation. “The mystery is why it hasn’t happened more often.”
Today, because of how open source evolved, securing it is no one’s specific job. Code with open source dependencies relies on potentially thousands of upstream developers, exposing downstream projects to supply chain attacks in which bad actors write malicious code into open source packages that other projects then include.
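One basic mitigation, sketched below, is to verify a downloaded dependency against a digest pinned when the package was originally vetted; the file path and expected digest here are placeholders, not real values.

```python
import hashlib

# Placeholder digest, recorded when the dependency was vetted.
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_artifact(path: str) -> bool:
    """Return True only if the artifact on disk matches the pinned digest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == EXPECTED_SHA256
```

Lockfiles and flags like pip’s --require-hashes automate the same check across an entire dependency tree.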
Humanity now runs on open source software, and it is time security became a first-class part of building and reviewing open source code. Large enterprises are starting to pay attention to this vulnerability and agree the problem needs to be solved. New companies are being created in real time to close the gap, so that open source can continue to provide enormous value, without today’s vulnerabilities, for everyone.
Sources:
- “#1 The Product Graveyard – Why Did Netscape Fail.” Accessed July 8, 2020. https://airfocus.com/blog/why-did-netscape-fail/.
- “A Git Origin Story | Linux Journal.” Accessed July 8, 2020. https://www.linuxjournal.com/content/git-origin-story.
- “About Pull Requests – GitHub Docs.” Accessed July 9, 2020. https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests.
- National Museum of American History. “Altair 8800 Microcomputer.” Accessed June 30, 2020. https://americanhistory.si.edu/collections/search/object/nmah_334396.
- “Ballmer: ‘Linux Is a Cancer.’” Accessed July 9, 2020. https://www.theregister.com/2001/06/02/ballmer_linux_is_a_cancer/.
- Brookes, Joseph. “Open Source Has Won Says GitHub CEO, Amid ICE Controversy.” Which-50, November 14, 2019. https://which-50.com/open-source-has-won-says-github-ceo-amid-ice-controversy/.
- “History of the Open Source Effort,” December 6, 1998. http://web.archive.org/web/19981206185148/http://www.opensource.org/history.html.
- “January 22, 1998 — the Beginning of Mozilla | Mitchell’s Blog.” Accessed July 7, 2020. https://blog.lizardwrangler.com/2008/01/22/january-22-1998-the-beginning-of-mozilla/.
- Knorr, Eric. “Linux at 25: An Ecosystem, Not Only an OS.” InfoWorld, August 22, 2016. https://www.infoworld.com/article/3109891/linux-at-25-an-ecosystem-not-only-an-os.html.
- Kornblum, Janet. “Netscape Sets Source Code Free.” CNET. Accessed July 7, 2020. https://www.cnet.com/news/netscape-sets-source-code-free/.
- Mhatre, Saurabh. “The Untold Story of Github.” Medium, October 24, 2016. https://medium.com/@smhatre59/the-untold-story-of-github-132840f72f56.
- Moody, Glyn. “Mozilla and the Open Source Browser Bonanza.” Computerworld, April 4, 2013. https://www.computerworld.com/article/3423622/mozilla-and-the-open-source-browser-bonanza.html.
- ———. “The Netscape Story: From Mosaic to Mozilla.” Computerworld, December 31, 2009. https://www.computerworld.com/article/3422414/the-netscape-story--from-mosaic-to-mozilla.html.
- “Open Source FAQ.” Accessed July 8, 2020. https://www-archive.mozilla.org/src-faq.
- GrowthHackers. “Red Hat: How They Developed a Big Idea That Shook Up A Huge Market.” Accessed July 8, 2020. https://growthhackers.com/growth-studies/red-hat-how-they-developed-a-big-idea-that-shook-up-a-huge-market.
- “Salon 21st | Let My Software Go!,” December 6, 1998. http://web.archive.org/web/19981206161046/http://www.salonmagazine.com/21st/feature/1998/04/cov_14feature.html.
- Shankland, Stephen. “20 Years Ago, Mozilla’s Move to Open Source Its Browser Was Radical. Now Even Microsoft’s a Convert.” CNET. Accessed July 8, 2020. https://www.cnet.com/news/mozilla-open-source-firefox-move-helped-rewrite-tech-rules-anniversary/.
- “Steven-Levy-Hackers-Ch1+2.Pdf.” Accessed June 30, 2020. https://classes.visitsteve.com/hacking/wp-content/Steven-Levy-Hackers-ch1+2.pdf.
- “The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary.” Choice Reviews Online 39, no. 05 (January 1, 2002): 39-2841. https://doi.org/10.5860/CHOICE.39-2841.
- “Timeline – MozillaWiki.” Accessed July 8, 2020. https://wiki.mozilla.org/Timeline.
- “Understanding the GitHub Flow · GitHub Guides.” Accessed July 9, 2020. https://guides.github.com/introduction/flow/.
- Venezia, Paul. “Linux at 25: How Linux Changed the World.” InfoWorld, August 24, 2016. https://www.infoworld.com/article/3109204/linux-at-25-how-linux-changed-the-world.html.
- Warren, Tom. “Here’s What GitHub Developers Really Think about Microsoft’s Acquisition.” The Verge, June 18, 2018. https://www.theverge.com/2018/6/18/17474284/microsoft-github-acquisition-developer-reaction.
- “What Exactly Is GitHub Anyway? | TechCrunch.” Accessed July 3, 2020. https://techcrunch.com/2012/07/14/what-exactly-is-github-anyway/.
- “What Open Source Culture Can Teach Tech Titans and Their Critics.” The Economist. Accessed July 8, 2020. https://www.economist.com/business/2019/07/20/what-open-source-culture-can-teach-tech-titans-and-their-critics.
Why Technical Debt Matters to Business Executives
“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.”
— Ward Cunningham, 1992
The concept of debt is fairly straightforward: you can pay now, or you can pay later with interest. Any CEO knows their company’s debt load, or can ask the CFO for a reminder. Yet even though those same executives understand that software and digital strategy are key to their companies’ long-term success, technical debt (TD) remains poorly understood outside IT circles, despite this year marking the 25th anniversary of Cunningham coining the term. Technical debt is traditionally associated with software development, but it applies to any discipline where technology is used to solve problems and glean insights. And like financial debt, TD can be simple, or it can accrue interest in the form of increased maintenance, complexity, inflexibility, and manual labor.
Over the years, we’ve lived through the decisions and consequences of taking on technical debt. Below, we describe why technical debt occurs and provide our views on how to manage it.
Why technical debt happens
In and of itself, technical debt is neither bad nor good; it is a choice. Like financial debt, it can be used as a tool to provide leverage against a lack of time, resources, or information. Unlike financial debt, however, it is frequently people toward the bottom of the org chart who are empowered to decide when to take it on and how much. But these decisions have broad consequences for how a company pursues its goals, and executives need to play an active role in shaping their TD strategy.
For high-functioning teams, technical debt is the result of doing a cost-benefit analysis. For example:
- A software team may work with product owners to decide that it is better to get a product, update, or feature out and into users’ hands, knowing they’ll likely have to tune it over time, than to attempt to build it perfectly from the start.
- To fix a production issue, an engineer may deploy a brittle, stopgap fix that buys the team time to diagnose and develop a better long-term solution (a sketch of this pattern follows this list).
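Here is a minimal, hypothetical illustration of that stopgap pattern; the function, the data quirk, and the ticket number are invented for the example. The quick fix stops the bleeding in production, and the TODO records the debt so it can be prioritized and paid down later.

```python
def parse_order_total(raw: str) -> float:
    # Stopgap (deliberate technical debt): some upstream records arrive with
    # a stray currency symbol and crash this parser in production. Strip it
    # for now so orders keep flowing.
    # TODO(TD-214): fix the upstream exporter, then delete this workaround.
    return float(raw.replace("$", "").strip())
```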
For other teams, TD is not a deliberate choice. They can create TD due to shortcomings in leadership, documentation, skill level, collaboration, and process. For example:
- Business executives may create TD by requiring constant changes, planning poorly, and over-promising custom deliverables to clients and stakeholders. To meet the resulting deadlines, developers take shortcuts and are never allotted time to remove legacy code. Left unchecked, this produces a complex, fragile code base full of vestigial components that people are scared to touch, because every change results in at least one unintended consequence.
- Development teams may create TD by not articulating the need to address TD in a timely fashion. Often developers make the choice to incur TD in order to make up for underestimating a task and then fail to follow up with product owners to properly prioritize and schedule paying it down.
Technical debt tends to accumulate naturally over time. In an example from our own company, our original user interface (UI) had been outsourced, and as we built our own internal engineering team, we took on more ownership of the UI. At the same time, we were a typical early-stage product company, developing features as quickly as possible and innovating as we went. After a few months, the team broached the topic of paying down technical debt with the CEO. We didn’t want to just address the debt; we essentially wanted to start over and rewrite the UI. Why? For one, the UI had become fragile as we built out more and more features: simple changes required more effort and more testing. The implementation also lacked several large, critical features that would require significant effort to add. We saw the UI as more or less a prototype, and recognized its shortcomings as a stable platform.
We presented leadership with three choices: (1) continue doing what we had been doing; (2) address the debt in the existing UI; or (3) come up with a plan to transition to a new, 2.0 version. As a team we discussed the implications of each. Everyone agreed that the status quo was not really an option, and we didn’t consider it for long. The decision came down to which would be better: trying to fix a fragile system, or starting fresh with a deeper understanding of where we wanted to go.
Before we could make this decision, however, we needed to consider more than just the technical justifications. A major consideration was the impact on both our existing clients and prospects. In the end, we worked out a plan that enabled us to satisfy both our business and technical needs. We agreed that we’d freeze all feature development on the existing UI and only take on TD to work on critical patches for it. In parallel, the team would begin working on 2.0.
None of this was easy; it required compromises by all parties. Throughout the execution of the plan, we met and discussed the TD, opportunity costs, and risks of making or not making each change to the original UI. Needless to say, there was friction, but the timing was right. Had we waited much longer, we would have lost a window of opportunity; had we started much sooner, we would have had fewer insights into what our 2.0 release really required to be successful.
What to do about technical debt
When you’re facing down a backlog of technical debt, with all of its complexities and follow-on effects, it can be hard to see the forest for the trees. To address technical debt head-on, we take a three-phase approach: assign, assess, and account.
Assign – Executives should make engineering and technical leaders responsible for measuring TD, just as a CFO is responsible for reporting financial debt at executive briefings. At a monthly executive meeting, an engineering leader should be able to brief the executive team on how much TD exists so the team can plan resource allocation. Also make sure that the services team implementing client-specific solutions works closely with product and software developers to turn those one-off fixes into generalized solutions. The goal is to empower your teams to move toward a comprehensive, quantitative understanding of the company’s TD.
Assess – To facilitate this move away from scattered, anecdotal evidence of technical debt, companies need to create a way for their teams to measure and discuss TD. Most engineering teams are probably already tracking technical debt in their issue tracking system. If not, they could start somewhere simple like a wiki page. For planning, transparency, and forecasting, it’s important that these measurements do not stay siloed in your IT department. Business executives should schedule regular updates on TD from the technology team with an understanding that it is a natural consequence of how the organization operates as a whole.
Account – Plan for and schedule technical debt payments in two ways. First, leave some room in every development cycle (aka sprint) for developers to address TD they encounter. Second, know that some TD is too large to just be swept under the rug or cleaned up when encountered. As you learn when and why to adjust away from quick fixes toward permanent solutions, you will see the value in scheduling larger efforts to address TD into your delivery schedule.
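As one way of putting numbers behind all three phases, the sketch below models a simple TD register; the item names, hour figures, and break-even heuristic are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    principal_hours: float  # estimated effort to pay the debt down
    interest_hours: float   # extra effort the debt costs the team each sprint

register = [
    DebtItem("Fragile UI build pipeline", principal_hours=80, interest_hours=6),
    DebtItem("Hand-run client data import", principal_hours=40, interest_hours=10),
]

# Rank items by how quickly paying them off breaks even: an item whose
# per-sprint "interest" repays its fix within a quarter is a strong
# candidate for the delivery schedule.
for item in sorted(register, key=lambda d: d.principal_hours / d.interest_hours):
    sprints = item.principal_hours / item.interest_hours
    print(f"{item.name}: breaks even after {sprints:.1f} sprints")
```

A register like this gives the engineering leader in the “assign” phase something quantitative to brief, and gives the “account” phase a basis for deciding which items deserve a scheduled, larger-scale paydown.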
Companies large and small devote significant portions of their budgets to IT, yet many have no firm grasp of the technical debt they accrue. As large enterprises and SMEs become increasingly dependent on technology for competitive differentiation, knowing and managing TD levels, rather than flying blind, will separate top performers from the rest.