What Kind of Entrepreneurs Can Prevent the Next Colonial Hack

By Reed Simmons, MBA associate at First In, and Renny McPherson, managing partner of First In

 

Two weeks ago, at gas stations along the US East Coast, the tangible effects of cyber attacks came into sharp relief. Over the last year, there have been three major public cyber attacks against the United States and its interests: the massive SolarWinds software supply chain intrusion, the broad compromise of Microsoft Exchange servers, and, most recently, the attack on Colonial Pipeline, enabled and managed by the criminal group DarkSide. While the first two are attributed to nation-states, the Colonial Pipeline attack was orchestrated by a criminal group. Together, these attacks highlight two key developments in the cybersecurity ecosystem: the expanding attack surface and the democratization of advanced threats. We believe these trends will continue to accelerate, placing a premium on cybersecurity companies and professionals with first-hand experience in all elements of cybersecurity. Military and intelligence community veterans have such experience, and they are uniquely positioned to build the cybersecurity solutions of the future.

As an organization's software ecosystem grows in complexity, the number of potential cyber vulnerabilities and attack vectors—the totality of which is called the attack surface—expands exponentially. The cyberattacks of the last year exposed these growing vulnerabilities through complex methods of compromise. Hackers exploited previously unknown "zero-day" vulnerabilities in Microsoft Exchange servers to gain access to thousands of organizations' networks. The SolarWinds supply chain compromise used a different intrusion set, exploiting a trusted third-party software vendor to breach networks via pushed software updates. As we write, DarkSide's attack vector into Colonial Pipeline remains unknown, but the takeaway is clear: threat actors are aware of the growing attack surface and exploit its many vectors with growing sophistication. Moreover, anyone is a potential victim, as China's indiscriminate deployment of "web shell" backdoors to tens of thousands of servers demonstrates.

In the military and intelligence community, threat is defined as the product of capability and intent. We see a clear trend: as advanced cyber attack capabilities proliferate outward—spurred, in part, by open-source collaboration—barriers to entry for criminal and state actors fall. In turn, the cost-benefit analysis for more and more criminal groups, state actors, and pseudo-state actors is clear: they can produce asymmetric returns on the operational time they invest. DarkSide's attack is an example of the democratization of advanced threats: a group of non-state cybercriminals, half a world away, had the capability to shut down US critical infrastructure. While a shutdown may not have been DarkSide's ultimate purpose, that is no reason for comfort; there is hardly a dearth of malicious intent. We expect such attacks to multiply.

At First In, we believe that military and intelligence community veterans' experience at the leading edge of understanding attack vectors and devising cybersecurity solutions imparts the perspective, technical skills, and community needed to understand and counteract state and criminal cyber actors. Cyber startups will need to counter advanced threats on behalf of the private sector, critical infrastructure, and the government. Veteran entrepreneurs are well positioned to create some of the most promising cyber startups over the next several years. This is particularly true as the line separating attacks on the public and private sectors has blurred significantly. The federal government is already acting on a new model of cyber resiliency, built around Zero Trust, to modernize the nation's cyber defenses in partnership with the private sector. As companies like Mandiant/FireEye and Tenable show, military veterans have a track record of success in the cybersecurity market. Yet as a demographic they remain broadly underserved from a capital perspective. More venture firms need to appreciate the diverse perspective that veterans bring, and bridge this gap to unleash their potential.


Emerging Themes in Cyber Security – 2021

By Renny McPherson and Dr. Josh Lospinoso

 

The cybersecurity landscape is evolving rapidly as the attack surface grows exponentially, driven by mega-trends in how people live today: more devices, more digital everything, more open source, more enterprises developing software, and everything digital being connected.

Covid's work-from-home mandates have exacerbated the risk. As such, there are many opportunities for startups to have an impact by addressing a new, modern theme or by taking a fresh approach to a long-standing cyber segment such as endpoint protection. Below, we outline eight themes of interest for First In this year. This list is far from exhaustive, as there are many segments within cybersecurity that present opportunity.

 

The Long Tail

Small and medium-sized businesses (SMBs) are increasingly at risk of cyber attack, and enterprises are more and more vulnerable to supply chain risk from their vendors and partners. Approaches that have worked in the enterprise are now needed by SMBs, at a lower price point and with greater ease of use.

We will devote a follow-on post to the long tail of risk.

 

Data Security

Enterprises generate and retain massive amounts of data. It's important to secure this data with a combination of filtering, blocking, and remediation techniques. Data security platforms will integrate directly with other data platforms to monitor data, provide backups, and ensure compliance. There are many incumbents in this space, but we believe it is a growing segment.

We are keeping an eye on data encryption startups that are answering the call for quantum-resilient encryption techniques. While the technological problem clearly exists, companies are still working to find viable business models for their solutions.

We believe that data vaults are an investment opportunity in this space. If service providers host highly secure data and expose it as a service to customers, they can neatly solve several pain points at once. These so-called “data vaults” transfer risk to the service provider.
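
To make the data-vault pattern concrete, here is a minimal sketch in Python of how an application might tokenize a sensitive field through a vault rather than storing it directly. The VaultClient class and its methods are hypothetical, not any particular vendor's API; in practice the store lives with the service provider, behind authentication, authorization, and audit logging.

```python
import uuid

class VaultClient:
    """Hypothetical data-vault client: the vault stores the sensitive value
    and hands back an opaque token the application can safely persist."""

    def __init__(self):
        # In a real vault, this store lives with the service provider.
        self._store = {}

    def tokenize(self, sensitive_value: str) -> str:
        token = "tok_" + uuid.uuid4().hex
        self._store[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = VaultClient()
token = vault.tokenize("4111-1111-1111-1111")  # e.g. a card number

# The application database only ever sees the token, so a breach of that
# database exposes tokens, not the underlying sensitive data.
print("Store this in your database:", token)
print("Resolved via the vault:", vault.detokenize(token))
```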

Major players in this space include Very Good Security, Evervault, and Skyflow.

 

Application & Composition Analysis

COVID-19 exacerbated the pressure on technology organizations to integrate security into multiple phases of the software development lifecycle. Over the next several years, teams will increasingly integrate security checks into their build phases. Startups in this space will offer tools to detect vulnerabilities in software dependencies and perform software composition analysis.
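
As a rough illustration of the simplest, traditional form of software composition analysis, the sketch below checks a project's pinned dependencies against a table of known-vulnerable versions. The package names and advisories are invented placeholders; real tools resolve full dependency trees and pull from curated vulnerability databases.

```python
# Minimal sketch of dependency scanning: compare pinned dependencies
# against a table of known-bad versions.

KNOWN_VULNERABILITIES = {          # hypothetical advisory data
    ("examplelib", "1.2.0"): "EXAMPLE-2021-0001: remote code execution",
    ("otherlib", "0.9.1"): "EXAMPLE-2021-0002: path traversal",
}

def parse_requirements(text: str):
    """Yield (name, version) pairs from 'name==version' lines."""
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            yield name.lower(), version

def scan(requirements_text: str):
    findings = []
    for name, version in parse_requirements(requirements_text):
        advisory = KNOWN_VULNERABILITIES.get((name, version))
        if advisory:
            findings.append(f"{name}=={version} -> {advisory}")
    return findings

example = "examplelib==1.2.0\nrequests==2.25.1\notherlib==0.9.1\n"
for finding in scan(example):
    print(finding)
```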

Major players in this space include Sonatype, Snyk, WhiteSource, and Micro Focus. Phylum is an upstart taking a next-generation approach. Rather than matching known vulnerabilities against open source package versions, Phylum ingests terabytes of open source code and performs analysis to find unknown vulnerabilities, identify dependency risk, and mine for malicious activity. Earlier this year, First In led Phylum's seed-stage financing.

 

Application Security Orchestration and Correlation

While application security is a burgeoning industry, we believe the number of tools available to enterprises will grow substantially. These tools will require integration and correlation. Because this market will likely be fragmented, startups will rise to integrate complementary solutions and improve end-user experiences. This market is poised to break out.

Emerging companies in this space include Code Dx and ZeroNorth.

 

Cyber Insurance 

Cyber insurance is still in its early days, with major insurance providers finding their footing in this essential market. In the race to maturity here, look for more news such as the recently announced partnership between Google, Allianz, and Munich Re. With breaches rising every year, and cybersecurity spending rising with them, a risk transference mechanism is necessary. Large insurance companies are not well suited to copy-pasting life insurance actuarial tables onto the cyber risk paradigm. As a result, this is a ripe market in which small, nimble companies with strong risk-assessment chops can stand out.

We believe that a core problem in cyber insurance is information. Insurers simply have a difficult time quantifying risk. The insured, especially small and medium-sized businesses, want to mitigate what they can and transfer the rest without thinking about it too much. We believe there's a large market opportunity for companies to address both issues at once. By pairing cybersecurity assessments with insurance, the same entity can both perform a service for the SMB (assessing and reducing its cybersecurity risk) and more accurately understand what it is insuring. Finally, it becomes possible to price cybersecurity mitigations based on how they affect insurance premiums.
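
A toy example of that last point: if an insurer can tie specific, verified mitigations to reductions in expected loss, premiums can reflect them directly. All figures, risk factors, and discounts below are invented for illustration, not actuarial guidance.

```python
# Toy premium model: expected annual loss, reduced by verified mitigations,
# then multiplied by a loading factor for the insurer's costs and margin.

BASE_EXPECTED_LOSS = 80_000   # annualized expected cyber loss for an unmitigated SMB, in dollars
LOADING_FACTOR = 1.35         # insurer overhead and margin

MITIGATION_DISCOUNTS = {      # assumed reductions in expected loss
    "mfa_enforced": 0.25,
    "offline_backups": 0.20,
    "edr_deployed": 0.15,
}

def annual_premium(mitigations):
    """Price a policy from the mitigations verified during the assessment."""
    expected_loss = BASE_EXPECTED_LOSS
    for name in mitigations:
        expected_loss *= 1 - MITIGATION_DISCOUNTS.get(name, 0.0)
    return round(expected_loss * LOADING_FACTOR, 2)

print(annual_premium(set()))                                # no controls in place
print(annual_premium({"mfa_enforced", "offline_backups"}))  # priced after an assessment
```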

Emerging companies include Coalition, Cowbell, and Trava.

 

Unifying Security in the Cloud: CSPM, CWPP and GRC

As containerization permeates everything, cloud workload protection platforms will become essential additions to cloud access security broker offerings. This is a hot space, with recent acquisitions by Palo Alto, McAfee, Cisco, Check Point, and Fastly. As Kara Nortman of Upfront Ventures hypothesizes, the "Rise of the Multi-Cloud" will be a core driver of demand for cybersecurity tools. While 93% of enterprises intend to pursue a multi-cloud strategy, most cybersecurity products aren't built for a cloud-first world.

Caveonix created a single integrated platform for automated compliance, cloud security posture management (CSPM), cloud workload protection (CWPP), and governance in hybrid and multi-cloud environments. First In led Caveonix's $7M Series A in Q4 2020.

 

Identity and Access Management

Identity and access management (IAM) governs permissions across an enterprise. It helps organizations manage employee and customer identities and ensures that privacy preferences and access provisioning safeguard sensitive services and data. This is a large and growing market.

We believe that there's a major opportunity for players to develop better rules management for IT and security teams. Currently, this is an error-prone and labor-intensive process.

There continues to be a major opportunity to evolve beyond passwords and multifactor authentication. Possible replacements, such as Zero-Factor Authentication, rely on behavioral analytics and the device used for access.
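
To illustrate how authentication decisions can be driven by behavioral and device signals rather than passwords, here is a minimal risk-scoring sketch. The signals, weights, and thresholds are assumptions chosen for illustration, not a production policy.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool           # device previously bound to this user
    usual_location: bool         # geolocation consistent with recent history
    typing_pattern_match: float  # 0.0-1.0 similarity from behavioral analytics
    impossible_travel: bool      # login implies physically impossible movement

def risk_score(ctx: LoginContext) -> float:
    """Combine signals into a 0-1 risk score. Weights are illustrative."""
    score = 0.0
    if not ctx.known_device:
        score += 0.35
    if not ctx.usual_location:
        score += 0.20
    score += (1.0 - ctx.typing_pattern_match) * 0.25
    if ctx.impossible_travel:
        score += 0.40
    return min(score, 1.0)

def decide(ctx: LoginContext) -> str:
    score = risk_score(ctx)
    if score < 0.3:
        return "allow"     # frictionless, no explicit factor required
    if score < 0.7:
        return "step_up"   # fall back to a stronger factor
    return "deny"

print(decide(LoginContext(True, True, 0.95, False)))   # allow
print(decide(LoginContext(False, False, 0.40, True)))  # deny
```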

Major players in this space are Beyond Identity, Forter, Mati, JumpCloud, and Alloy.

 

New Approaches to Endpoint Security

Endpoints are the devices, such as computers, phones, network gear, and servers, that connect to a network, process data, and provide services; they remain critical. This is a large, well-established segment of the information security field, and it is so crowded that we view it as very difficult for new players to penetrate.

However, some subsegments still offer room for newcomers. Internet of Things (IoT) and operational technology (OT) devices, for example, are a new frontier of cybersecurity that we believe represents a huge opportunity.

We believe there's opportunity in the Extended Detection and Response (XDR) space. XDR represents a potential next generation of endpoint security, in which detection and response are automated. Startups with a superior product could challenge increasingly outdated antivirus solutions and labor-intensive security information and event management (SIEM) incumbents.
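
As a sketch of what automated detection and response can look like, the snippet below correlates signals from different telemetry sources for the same host and triggers a containment action when enough independent sources agree. The event names, threshold, and response hook are hypothetical.

```python
from collections import defaultdict

# Hypothetical telemetry stream: (source, host, signal)
EVENTS = [
    ("endpoint", "host-17", "suspicious_powershell"),
    ("network",  "host-17", "beaconing_to_rare_domain"),
    ("identity", "host-17", "new_admin_account"),
    ("endpoint", "host-42", "suspicious_powershell"),
]

def correlate(events, threshold=3):
    """Group signals by host; flag hosts where several independent sources agree."""
    by_host = defaultdict(set)
    for source, host, signal in events:
        by_host[host].add((source, signal))
    return [host for host, signals in by_host.items()
            if len({src for src, _ in signals}) >= threshold]

def isolate(host):
    # Placeholder for an automated response, e.g. an API call to quarantine the host.
    print(f"[response] isolating {host} from the network")

for host in correlate(EVENTS):
    isolate(host)
```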

 

The Rise, Ubiquity and Vulnerability of Open Source Software

By Renny McPherson, managing partner of First In, and Matthew Dulaney, director of operations (summer associate) at First In.
 
Open source software is ubiquitous today. That's a good thing. But it wasn't always clear that open source would win. Reviewing the history of proprietary and open source software development can help us understand how open source became so widely used, and how it came to be both incredibly valuable to the world and incredibly vulnerable as a threat vector for cyber attacks.

Let's start with a brief history. In the early 1970s, universities and research organizations were effectively the only institutions with the resources and need to purchase computers functional enough to be useful. MITS (Micro Instrumentation and Telemetry Systems) changed that with the Altair 8800 microcomputer, which began to bring computing to the mainstream. Bill Gates and Paul Allen designed a BASIC interpreter for the device, fueling both the Altair's sales and the success of their own company, Microsoft. By 1976, BASIC was used by most Altair owners; however, few of them actually paid for the software. Instead, many people loaded friends' purchased copies onto their machines.

In 1976, Bill Gates wrote an "Open Letter to Hobbyists" accusing users of stealing Microsoft's software. The letter was an assault on a software development community that embodied what would now be considered open source values of decentralization and open information sharing, values stemming from the early days of computing. "Hobbyists" would riff off of Gates's code and share their own versions for free; other developers would take those modified editions and further adjust the code, spreading a network of free software based on the original code written by Microsoft. Gates condemned code sharing, instead advocating professional, paid software development.

Proprietary software — which can be called "closed-source" software, as opposed to open source software — dominated the 1980s and much of the 1990s. Software was sold attached to hardware products, and users could not access or modify the source code behind their products. Microsoft, after Gates's "Open Letter to Hobbyists," continued to criticize the principles behind open source.

Meanwhile, Richard Stallman, an MIT computer programmer, set out to build a free operating system, GNU, in 1983. Stallman had once programmed printer software to notify users and pause printing when the printer jammed; when a new printer arrived with closed-source software that prevented him from modifying it the same way, he resolved to build an operating system whose code would stay free. Stallman quit MIT to continue developing GNU, and his strict adherence to free software is codified in the GNU General Public License, or GPL. The GPL guarantees users the freedom to run, study, modify, and redistribute covered software, and anyone who distributes GPL-licensed code, or software built on it, must pass those same freedoms along: they cannot claim exclusive rights to it or impose additional restrictions on downstream users.

In 1991, Linus Torvalds created Linux with GNU's tools. Linux — a portmanteau of Unix and his first name — is, strictly speaking, the kernel of an operating system, and it gives developers extensive flexibility to write programs that meet their needs. Licensed under GNU's GPL, Linux is steeped in open source orthodoxy: users can freely use and alter the code, but if they distribute their modifications they must make the source available for others to access.

While Linux has had a massive impact in the history of open source software development, its early success was limited. An initial point of contention was rooted in doubt that a mass of amateur, part-time coders could effectively and consistently create usable software. Another roadblock was Linux’s complexity compared to meticulously developed alternatives.

Linux's popularity exploded among those willing to spend time untangling its complexity in return for its power. Soon enough, companies like Red Hat began to develop Linux-based toolkits that allowed developers to harness Linux's vast functionality. Red Hat built a careful business model to take advantage of open source without monetizing open source software per se. It would assemble open source code, polish it, and release two versions: a free version with just the source code, and a paid version that included the source code plus how-to guides, serial numbers, and customer support. Red Hat appeased hardcore open source developers, offered prices that were a fraction of the competition's, and made money along the way. Its popularity surged.

Linux's burgeoning success converted many previously hardline anti-open source software developers. One such convert was Eric Raymond, who published his experience with open source in The Cathedral and the Bazaar. Raymond initially believed proper software development required careful design by small teams, with no early beta releases. After his conversion, he wrote that Linus Torvalds's genius was to "release early [and] release often" (p. 29) and to treat users as co-developers (p. 27). He also debunked the claim that open source software is inherently inferior to proprietary alternatives: "Quality [is] maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers. To the amazement of almost everyone, this work[s] quite well."

Raymond's essay caught the attention of executives at Netscape, maker of the popular Navigator web browser. Soon after it was published, the company decided to release the source code for Navigator 5.0, kicking off the Mozilla project and lending further legitimacy to open source. From the Mozilla project's ashes, developers at AOL (which had acquired Netscape) created a leaner version of the browser, Mozilla Firefox, in 2004. Where Navigator 5.0 was a jumbled mess of features and code, Firefox was sleek and user-friendly; it challenged Internet Explorer's dominance, reaching 100 million downloads within a year and a half and 1 billion downloads by 2009.

As Firefox grew in popularity, Linus Torvalds himself was advancing another key pillar of open source software development: Git. Git allowed developers to track revisions and easily implement source code changes, bringing transparency and elegance to previously clunky version control schemes. Git's tools were consolidated with the 2008 launch of GitHub, a free hosting service for Git repositories and open source code. The typical GitHub workflow begins with branching, in which developers create a new environment where they can change code without directly impacting the master branch. In the new branch, developers "commit" new code to the existing project, adding and testing new features separate from the core project. Commits also track development history, allowing project owners to see who made each change and to reverse it if bugs are discovered. Developers solicit community feedback with Pull Requests and then, once satisfied, merge the branch back into the master project.
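
For readers less familiar with that flow, here is a minimal sketch of the branch, commit, push, and merge cycle, scripted in Python via subprocess. It assumes a local Git repository, the branch and file names are hypothetical, and the pull request and review step happens on GitHub itself rather than in this script.

```python
import subprocess

def git(*args):
    """Run a git command in the current repository and raise if it fails."""
    subprocess.run(["git", *args], check=True)

# Branch: create an isolated environment for the change.
git("checkout", "-b", "feature/new-parser")   # hypothetical branch name

# Commit: record the change on the branch, separate from the master branch.
git("add", "parser.py")                       # hypothetical file
git("commit", "-m", "Add new parser")

# Push the branch; the pull request and review happen on GitHub.
git("push", "--set-upstream", "origin", "feature/new-parser")

# After review and approval, the branch is merged back into the master project.
git("checkout", "master")
git("merge", "feature/new-parser")
```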

GitHub facilitates open source development by tracking contribution histories and allowing developers to build reputations on the platform. The branching and merging process addresses version control, while profile tracking makes developer histories transparent and lets software owners better evaluate incoming changes to their code.

Open source completed its rise to ubiquity and officially joined the mainstream when Microsoft purchased GitHub in 2018 for $7.5 billion in Microsoft stock. The acquisition marks a stark turnaround in sentiment from Bill Gates's Open Letter, and from the early 2000s, when then-CEO Steve Ballmer called Linux "a cancer". If Netscape's embrace of open source in 1998 offered credibility and allowed corporations to consider similar adoption, Microsoft's acquisition solidified open source as the dominant software development ethos. GitHub hosts source code from major corporations, including Facebook, Amazon, and Google, and continues to be the default for software developers worldwide. Per CNBC, "The success of open source reveals that collaboration and knowledge sharing are more than just feel-good buzzwords, they're an effective business strategy."

Software developers of all kinds — from tech giants to amateurs — continue to rely on open source code, harnessing others' ingenuity to create high-quality programs. However, open source remains imperfect, as demonstrated by the growing volume of software bugs and cyber attack vulnerabilities reported in recent years. Notably, Google in 2014 disclosed the now-infamous Heartbleed bug in OpenSSL: over 500,000 websites used OpenSSL's Heartbeat extension (the feature afflicted by Heartbleed) and were thus vulnerable to attack. Companies at risk ranged from Twitter to GitHub to the Commonwealth Bank of Australia. The incident highlighted a crucial vulnerability in open source: essential programs like OpenSSL are imperative to the success of major companies and projects, yet lack security oversight.

Experts agree. "Windows has a dev team. OpenSSL these days is two guys and a mangy dog," says Matthew Green, assistant professor at Johns Hopkins. Chris Duckett of ZDNet writes: "In years past, it was often the case that businesses took the view that all that was needed was to drop source code on a server, and the community will magically descend to contribute and clean up the code base. Similarly, users of open source software wrongly assume that because the code is open source, that an extensive review and testing of the package has occurred." "The mystery is not that a few overworked volunteers missed the bug," says Steve Marquess, former president of the OpenSSL Foundation. "The mystery is why it hasn't happened more often."

Today, given how open source evolved, it is no one's specific job to secure it. Code with open source dependencies relies on potentially thousands of developers, exposing downstream projects to upstream attacks from bad actors who write malicious code into open source packages that are then included in other projects.

Humanity is reliant on open source software. It's time to ensure security is built into the writing and reviewing of open source code. Large enterprises are starting to pay attention to this burgeoning vulnerability and agree that the problem needs to be solved. New companies are being created in real time to address the gap and to ensure open source can continue to provide enormous value for everyone, without today's vulnerabilities.

 

Why Technical Debt Matters to Business Executives

“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.”

— Ward Cunningham, 1992

 

The concept of debt is fairly straightforward: you can pay now, or you can pay later with interest. Any CEO knows their company’s debt load or can ask the CFO when they need a reminder. Yet, even though those same executives understand that software and digital strategy are key to the long-term success of their companies, the concept of technical debt (TD) is still not well understood outside of IT circles, despite this year being the 25th anniversary of Ward Cunningham coining the term. Technical debt is traditionally associated with software development, but it is applicable to any discipline where technology is being used to solve problems and glean insights. Like real debt, TD can be simple debt or it can accrue interest in the form of increased maintenance, complexity, lack of flexibility, and manual labor.

Over the years, we’ve lived through the decisions and consequences of taking on technical debt. Below, we describe why technical debt occurs and provide our views on how to manage it.

Why technical debt happens

In and of itself, technical debt is neither bad nor good — it is a choice. Like financial debt, it can be used as a tool to provide leverage to address a lack of time, resources, or information. Unlike financial debt, however, it is frequently the people toward the bottom of the org chart who are empowered to decide when to take it on and how much. Yet these decisions have broad consequences for how a company pursues its goals, and executives need to play an active role in shaping their TD strategy.

For high-functioning teams, technical debt is the result of doing a cost-benefit analysis. For example:

  • A software team may work with product owners to decide that it is better to get a product, update, or feature out and into users’ hands, knowing they’ll likely have to tune it over time, than to attempt to build it perfectly from the start.
  • In order to fix a production issue, an engineer may deploy a brittle, stopgap fix in order to buy the team time to diagnose and develop a better long-term solution.

For other teams, TD is not a deliberate choice. They can create TD due to shortcomings in leadership, documentation, skill level, collaboration, and process. For example:

  • Business executives may create TD by requiring constant changes, planning poorly, and over-promising custom deliverables to clients and stakeholders. In order to meet their deadlines, developers take shortcuts and are not allotted time to remove legacy code. Left unchecked, this produces a complex, fragile code base filled with vestigial components that people are scared to touch, because every change results in at least one unintended consequence.
  • Development teams may create TD by not articulating the need to address TD in a timely fashion. Often developers make the choice to incur TD in order to make up for underestimating a task and then fail to follow up with product owners to properly prioritize and schedule paying it down.

Technical debt tends to accumulate naturally over time. In an example from our own company, our original user-interface (UI) had been outsourced, and as we built our own internal engineering team, we took on more ownership of the UI. At the same time, we were also a typical early-stage product company that was developing features as quickly as possible and innovating as we went. After a few months, the team broached the topic of paying down technical debt with the CEO. We didn’t want to just address the debt, we essentially wanted to start over and rewrite the UI. Why? For one, the UI had become fragile as we built out more and more features. Simple changes required more effort and more testing. The current implementation was also lacking several large, critical features that were going to require significant effort to implement. We saw the UI as more or less a prototype, and recognized its shortcomings as a stable platform.

We presented to leadership the three choices: (1) continue doing what we had been doing; (2) address the debt in the existing UI; or (3) come up with a plan to transition to a new, 2.0 version. As a team we discussed the implications of each. Everyone agreed that the status quo was not really an option, and we didn’t consider that for long. It came down to which would be better: trying to fix a fragile system or starting fresh with a deeper understanding of where we wanted to go.

Before we could make this decision, however, we needed to consider more than just the technical justifications. A major consideration was the impact on both our existing clients and prospects. In the end, we worked out a plan that enabled us to satisfy both our business and technical needs. We agreed that we’d freeze all feature development on the existing UI and only take on TD to work on critical patches for it. In parallel, the team would begin working on 2.0.

None of this was easy; it required compromises by all parties. Throughout the execution of the plan, we met and discussed the TD, opportunity costs, and risks of making or not making each change to the original UI. Needless to say, there was friction, but the timing was right. Had we waited much longer, we would have lost a window of opportunity; had we started much sooner, we would have had fewer insights into what our 2.0 release really required to be successful.

What to do about technical debt

When you’re facing down a backlog of technical debt, with all of its complexities and follow-on effects, it can be hard to see the forest for the trees. To address technical debt head-on, we take a three-phase approach: assign, assess, and account.

Assign – Executives should make engineering and technical leaders responsible for measuring TD, just as a CFO is responsible for reporting financial debt at executive briefings. At a monthly executive meeting, an engineering leader should be able to brief the executive team on how much TD there is so the team can best plan for resource allocation. You should also make sure the services team that implements solutions specific to client demand is working closely with product and software developers to implement generalized solutions to those issues. The goal is to empower your teams to move toward a comprehensive, quantitative understanding of the company’s TD.

Assess – To facilitate this move away from scattered, anecdotal evidence of technical debt, companies need to create a way for their teams to measure and discuss TD. Most engineering teams are probably already tracking technical debt in their issue tracking system. If not, they could start somewhere simple like a wiki page. For planning, transparency, and forecasting, it’s important that these measurements do not stay siloed in your IT department. Business executives should schedule regular updates on TD from the technology team with an understanding that it is a natural consequence of how the organization operates as a whole.
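
As one way to turn scattered, anecdotal evidence into a number an executive team can track month over month, the sketch below aggregates technical-debt items exported from an issue tracker into a simple summary. The fields and effort estimates are hypothetical; any consistent convention your tracker supports would work.

```python
from collections import Counter

# Hypothetical export of issues labeled "tech-debt" from an issue tracker.
TD_ISSUES = [
    {"component": "ui",       "estimate_days": 5, "age_days": 120},
    {"component": "ui",       "estimate_days": 2, "age_days": 30},
    {"component": "pipeline", "estimate_days": 8, "age_days": 200},
    {"component": "api",      "estimate_days": 3, "age_days": 15},
]

def summarize(issues):
    """Roll individual TD items up into figures suitable for an executive briefing."""
    total_effort = sum(i["estimate_days"] for i in issues)
    by_component = Counter()
    for issue in issues:
        by_component[issue["component"]] += issue["estimate_days"]
    return {
        "open_items": len(issues),
        "estimated_effort_days": total_effort,
        "effort_by_component": dict(by_component),
        "oldest_item_age_days": max(i["age_days"] for i in issues),
    }

# A one-slide summary for the monthly executive meeting.
print(summarize(TD_ISSUES))
```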

Account – Plan for and schedule technical debt payments in two ways. First, leave some room in every development cycle (aka sprint) for developers to address TD they encounter. Second, know that some TD is too large to be cleaned up in passing or swept under the rug. As you learn when and why to move away from quick fixes toward permanent solutions, you will see the value in scheduling larger TD-reduction efforts into your delivery schedule.

Companies large and small devote significant portions of their budgets to IT yet do not have a firm grasp on the debt they accrue. Knowing and managing technical debt levels, rather than not knowing them, will separate top performers from the rest as large enterprises and SMEs become increasingly dependent on technology for competitive differentiation.

Renny McPherson wrote this with Mike Gruen. Renny and Mike were colleagues at RedOwl Analytics, where Mike was VP Engineering and Renny was a co-founder. RedOwl Analytics was acquired by Forcepoint (Raytheon) in 2017.