“You should open source your software.”
Q: who then will take responsibility for its maintenance?
Maintenance accounts for roughly 85–90% of software’s total cost of ownership, a fact the inexperienced tech enthusiast tends to overlook.
They overlook total cost of ownership because acquisition is often managed within an organization in a manner compartmentalized from maintenance, and because maintenance “just happens”: most people (consumers, as opposed to professional technology managers) are simply unaware of it, even as they dutifully update their apps.
And so they read a few naïvely enthusiastic articles about the “semantic web” and will arrogantly dismiss caution regarding maintenance, to their detriment.
This issue is exacerbated by similarly inexperienced programmers who like to build things but avoid doing maintenance because they think maintenance is boring or unnecessary.
In terms of temperament, programmers who like to build shiny new (often unnecessary) things abhor maintenance, so they downplay its importance.
These harmful dynamics magnify within the cultures of certain organizations (for example, those heavy on marketing and sales and light on sufficiently experienced engineering).
Poorly maintained software manifests as instability (crashes) and performance defects (slow apps), and is one of the most consistent reasons security vulnerabilities and exploitable “backdoors” emerge.
I would observe that “too few understand how to respond properly” because too few are both knowledgeable and sufficiently experienced with something formally referred to as the “software product lifecycle.”
The reason too few are both knowledgeable and sufficiently experienced is that there’s almost no incentive for it: modern software use takes place within the protective confines of airtight indemnification clauses (articulated within the software licensing agreements that almost nobody reads), which all but ensure that the vendor will never be held accountable for these oversights.
And the few negative impacts that DO manifest are often mitigated through liability insurance.
I know what I’d do if asked to triage this situation and define a mitigation plan: calibrate in-the-trenches execution against risk-based status reporting at the executive and board level, aligned with organizational KPIs.
For instance, there’s tooling that will automatically scan an organization’s software inventory and identify which products contain a given vulnerability.
One such product category is generally referred to as application vulnerability management; in particular, static scanning of the code and dependencies an organization is using would be sufficient for most to quickly zero in on which applications rely upon the defect (speaking here of the Log4j defect).
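As a rough illustration of what such a scan does, the sketch below walks a directory tree and flags JAR files whose names indicate a log4j-core 2.x release predating the 2.15.0 fix for CVE-2021-44228 (follow-up fixes landed in later 2.x releases). This is a hypothetical, filename-based sketch; real vulnerability-management tools inspect manifests, nested archives, and package metadata far more thoroughly.

```python
import re
from pathlib import Path

# Matches e.g. "log4j-core-2.14.1.jar" and captures the minor version ("14").
VULNERABLE = re.compile(r"log4j-core-2\.(\d+)\.")

def scan(root: str) -> list[str]:
    """Return paths of JARs whose filename suggests a pre-2.15 log4j-core."""
    findings = []
    for jar in Path(root).rglob("*.jar"):
        m = VULNERABLE.search(jar.name)
        if m and int(m.group(1)) < 15:  # 2.0–2.14.x predate the first fix
            findings.append(str(jar))
    return findings
```

A real deployment would feed such findings into a ticketing or reporting pipeline rather than a simple list.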
In terms of communicating status to the executive suite, the National Institute of Standards and Technology (NIST) publishes the Cybersecurity Framework, which provides high-level framing to quickly help non-technical executives understand the category and status of a particular situation.
What’s great about the Cybersecurity Framework is that it also serves as a “crosswalk” to other regulatory compliance frameworks, something like a Rosetta Stone.
So if you’re in the retail industry and your executives are uptight about your organization’s PCI DSS compliance (which governs the handling of credit card transactions), the NIST Cybersecurity Framework will help translate into the specific PCI DSS requirements so corporate risk officers can make a quick, detailed-enough assessment of what actions are required.
If you’re in the energy industry, such as with an electric utility, you can likewise map the NIST Cybersecurity Framework to NERC CIP, that sector’s set of cybersecurity compliance requirements.
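To make the “Rosetta Stone” idea concrete, here is a minimal sketch of such a crosswalk as a lookup table. The NIST CSF function names are real; the control IDs shown are genuine PCI DSS and NERC CIP controls, but the specific pairings are illustrative examples, not an authoritative mapping.

```python
# Illustrative crosswalk from NIST CSF functions to example controls in
# two other frameworks. Pairings are examples only, not official mappings.
CROSSWALK = {
    "Identify": {"PCI DSS": "Req. 2.4 (inventory of system components)",
                 "NERC CIP": "CIP-002 (BES cyber system categorization)"},
    "Detect":   {"PCI DSS": "Req. 11 (regularly test systems and processes)",
                 "NERC CIP": "CIP-007 (system security management)"},
    "Respond":  {"PCI DSS": "Req. 12.10 (incident response plan)",
                 "NERC CIP": "CIP-008 (incident reporting and response)"},
}

def translate(csf_function: str, framework: str) -> str:
    """Look up the example control mapped to a given CSF function."""
    return CROSSWALK.get(csf_function, {}).get(framework, "no mapping listed")
```

For example, `translate("Respond", "NERC CIP")` points a risk officer at the incident-response control relevant to their sector.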