The following article is based on a webinar series on enterprise API security by Imvision, featuring expert speakers from IBM, Deloitte, Maersk, and Imvision discussing the importance of centralizing an organization's visibility of its APIs as a way to accelerate remediation efforts and improve the overall security posture.
Centralizing security is challenging in today's open ecosystem
When approaching API visibility, the first thing we have to recognize is that today's enterprises no longer manage all of their APIs through one system. According to IBM's Tony Curcio, Director of Integration Engineering, many of his enterprise customers already work with hybrid architectures that leverage classic on-premise infrastructure while adopting SaaS and IaaS across various cloud vendors.
These architectures aim to increase resilience and flexibility, but at the cost of complicating centralization efforts. In these organizations, it is imperative to have a centralized API location with deployment into each of these environments, to ensure greater visibility and better management of API-related business activities.
The challenge for security teams is that there isn't one central place where all APIs are managed by the development team - and as time passes, that complexity is likely to only get worse. Moreover, this complexity doesn't stop at the infrastructure level, but carries on into the application layer.
Deloitte's Moe Shamim, Senior Technology Executive and Deputy CISO of US Consulting, sees non-monolithic application development as key. He claims that organizations must now break down those millions of lines of code into API-based, modularized processes and systems in order to remain competitive, all while keeping threat vectors to a minimum. This requires significant rethinking, as one must now account for API gateways, IAMs, throttling and more, which demands significant time and resources.
The API footprint of organizations no longer grows only organically over time. It now consists of APIs originating from mergers and acquisitions, versioning, internal APIs, 3rd-party APIs, drift from the original intended usage, dev, test, debug and diagnostic purposes, and so on. This makes complexity an even bigger issue, as many APIs are undocumented and unmanaged, and, needless to say, unprotected.
Where do 'Shadow APIs' come from?
Enforcing a consistent program across each of the different environments where enterprise assets are located is a challenge in this hybrid cloud reality. One should take this consistency challenge into consideration when selecting technology stacks, so that enforcing policies and governance programs everywhere is not an issue.
But this is easier said than done, especially in successful enterprises that merge with and acquire other organizations: each business uses different technologies, mandating a bespoke API security process for each new environment that's added.
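One pattern that can help is expressing governance rules as code that runs identically against every environment's API catalog. Below is a minimal, hypothetical sketch in Python; the policy names, environments, and catalog format are all illustrative, not any particular vendor's model.

```python
# Hypothetical policy-as-code check: the same rules run against every
# environment's API catalog, regardless of the underlying gateway vendor.
REQUIRED_POLICIES = {"auth_required", "rate_limited", "tls_only"}

# Illustrative catalogs, as they might be exported from each environment.
apis_by_env = {
    "on-prem": [{"name": "billing", "policies": {"auth_required", "tls_only"}}],
    "aws": [{"name": "shipments", "policies": set(REQUIRED_POLICIES)}],
}

for env, apis in apis_by_env.items():
    for api in apis:
        missing = REQUIRED_POLICIES - api["policies"]
        if missing:
            print(f"[{env}] {api['name']} is missing policies: {sorted(missing)}")
```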
API lifecycle? API lifestyle!
According to Moe Shamim, the API lifecycle can be boiled down to the pillars found in the image below. When fashioning an API security strategy, one must take into account architecture, distribution, design and a whole slew of other aspects that impact the way an organization develops its approach to APIs. You can look at each of these aspects as controls you inject at every stage of the API lifecycle. And it essentially ties back to visibility and centralization discussed above.
An image of the API lifecycle pillars
Planning determines issues like whether APIs will be used only within the network firewall or publicly, as well as issues like authentication. It will also touch upon more technical issues such as builds, gateway types, and the programming languages you'll use. The important thing, and this goes for every decision you make regarding your security posture, is to make a choice that aligns with your ecosystem of tools and takes your threat modeling into consideration.
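To make that concrete, here is a minimal sketch, assuming a Python/FastAPI stack, of how a planning-stage decision (this API is public, so every route needs authentication) can be encoded from day one. The endpoint, header name, and key store are purely illustrative.

```python
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

app = FastAPI()

# Planning decision encoded up front: this API is exposed publicly, so every
# route requires an API key. An internal-only API might choose differently.
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

VALID_KEYS = {"demo-key-123"}  # hypothetical key store; use a secrets manager in practice

def require_api_key(api_key: str = Depends(api_key_header)) -> str:
    if api_key not in VALID_KEYS:
        raise HTTPException(status_code=401, detail="Invalid or missing API key")
    return api_key

@app.get("/shipments", dependencies=[Depends(require_api_key)])
def list_shipments():
    return {"shipments": []}
```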
In the Build pillar, scanning for OWASP Top 10 issues is a must, and SAST tools are great for that. Pentesting and versioning may not necessarily be integrated into your security posture, but they're both powerful mechanisms that will surely benefit your security arsenal.
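As one possible illustration, a build pipeline can gate merges on SAST findings. The sketch below assumes a Python codebase scanned with Bandit, an open-source SAST tool; the source path and the high-severity-only gate are arbitrary choices, not a recommended baseline.

```python
import json
import subprocess
import sys

# Run Bandit over the source tree and emit machine-readable findings.
result = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json"],
    capture_output=True,
    text=True,
)
report = json.loads(result.stdout or "{}")

# Fail the build only on high-severity findings (an illustrative threshold).
high = [i for i in report.get("results", []) if i.get("issue_severity") == "HIGH"]
for issue in high:
    print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
sys.exit(1 if high else 0)
```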
The Operate pillar includes issues like throttling, caching, and logging. A robust logging and monitoring mechanism is a must-have in the remediation phase, as it enables you to fix vulnerabilities from version to version.
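For example, throttling and logging can meet in a single small middleware. Here's a hedged sketch in Python: a per-client token bucket whose rate and burst numbers are illustrative, with log records of the kind that would feed the monitoring and remediation loop described above.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api.operate")

class TokenBucket:
    """Refill `rate` tokens per second, allowing bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.updated = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def handle_request(client_id: str, endpoint: str) -> int:
    bucket = buckets.setdefault(client_id, TokenBucket(rate=5, capacity=10))
    if not bucket.allow():
        # Throttling events are logged: these records feed monitoring and remediation.
        log.warning("throttled client=%s endpoint=%s", client_id, endpoint)
        return 429  # Too Many Requests
    log.info("ok client=%s endpoint=%s", client_id, endpoint)
    return 200
```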
Last but not least, we arrive at the Retire pillar of the lifecycle. Removing endpoints that are no longer in use is an essential best practice; basically, if you no longer need a service - don't leave it on. And if you don't need an API at all anymore, just take it offline; the same goes for cloud accounts.
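A retired endpoint can also fail loudly rather than silently. The sketch below, assuming a Python service with a hypothetical registry of sunset paths, answers retired routes with HTTP 410 Gone so nothing lingers half-alive as unmanaged attack surface.

```python
from http import HTTPStatus

# Hypothetical registry of retired endpoints and the date each was sunset.
RETIRED_ENDPOINTS = {
    "/v1/orders": "2023-01-31",
}

def dispatch(path: str, handler):
    # Retired paths answer explicitly instead of silently staying routable.
    sunset_date = RETIRED_ENDPOINTS.get(path)
    if sunset_date is not None:
        return HTTPStatus.GONE, {"error": f"Endpoint retired on {sunset_date}"}
    return HTTPStatus.OK, handler()

status, body = dispatch("/v1/orders", handler=lambda: {"orders": []})
print(status.value, body)  # 410 {'error': 'Endpoint retired on 2023-01-31'}
```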
Tony Curcio claims that one of the key tenets in the governance of API programs is coordination between the API producers, product management, and consumers. Looking at the security disposition of each of those personas and coordinating API policies that ensure secure use for each is a fundamental aspect of an organization's security posture.
Having an API-first mentality within the organization definitely helps. At IBM, for example, they build their own API management technology that enables them to expose, secure, and protect their APIs more easily. Having advanced technology behind you, like Imvision, also goes a long way. Their AI technology helps us understand more about attack vectors, including critical issues like their source.
Taking an intelligence-led security response approach
Gabriel Maties, Senior Solution Architect at Maersk, offers another perspective. Maersk is three years into an API program, and following a serious breach, cybersecurity is now taken into account constantly, as a way to stay at least as good as the attackers, if not better.
Sharing his perspective on observability, Gabriel sees API management as a multi-actor discipline from the very beginning because it shares resources and exposes them internally. Therefore, each and every point of entry into your system and its supporting mechanisms should be carefully observed and monitored centrally.
This centralization is important because observability is multidimensional in the sense that there's never one single aspect to monitor. This calls for a holistic view of APIs that enables you to easily understand where APIs are deployed, who owns them, who consumes them, how they're consumed, what normal consumption looks like and how each one is protected. Centralization also enables you to understand better what each API's lifecycle looks like, how many versions exist, what data is shared, where it's stored and who's using it.
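As a rough illustration of what one record in such a centralized inventory might capture, here is a Python sketch; every field and value is hypothetical, and a real inventory would be backed by discovery tooling rather than hand-written entries.

```python
from dataclasses import dataclass

@dataclass
class ApiRecord:
    """One entry in a centralized API inventory, mirroring the questions above."""
    name: str
    version: str
    owner_team: str          # who owns it
    deployment: str          # where it is deployed
    consumers: list[str]     # who consumes it
    data_classes: list[str]  # what data is shared
    auth_scheme: str         # how it is protected
    baseline_rps: float      # what normal consumption looks like

inventory = [
    ApiRecord(
        name="shipments", version="v2", owner_team="logistics-platform",
        deployment="aws-eu-west-1", consumers=["mobile-app", "partner-x"],
        data_classes=["PII"], auth_scheme="OAuth2", baseline_rps=120.0,
    ),
]
```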
Centralization is the only way to manage this complex ecosystem in a way that ensures maximum benefit and minimum risk.
An image of API visibility layers
Having centralized observability further enables insights, which allow you to take action on your observations. Observability lets you spot ongoing, active attacks that you may not even know about, and formulate strategies that act on the insights you draw from your observations.
Rule-based security is highly effective, and machine learning and deep learning are two technologies that automate and streamline it. There is simply no other option as the amount of data to contend with is overwhelming, not to mention that these technologies enable adaptive threat protection that helps contend with new threats.
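As a toy stand-in for this kind of learned, adaptive detection (not any vendor's actual implementation), the sketch below fits an Isolation Forest on synthetic 'normal' API traffic and flags an outlier; the features, their distributions, and the contamination rate are all illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic baseline traffic; columns are illustrative features per client:
# requests/min, error rate, distinct endpoints hit, mean payload bytes.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(
    loc=[60, 0.02, 5, 800], scale=[15, 0.01, 2, 200], size=(1000, 4)
)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A burst of errors across many endpoints with oversized payloads.
suspicious = np.array([[900, 0.40, 45, 12000]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```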
The bad news is that hackers are using these same technologies, and dealing with them requires significant organizational maturity to take the required actions. We're talking about some heavy-duty actions here, like turning off load balancers, switching over firewalls, and other infrastructural changes made in an automatic, rapid-fire fashion. This cannot be done without a high level of maturity across the organization.
Supervised machine learning can help organizations develop this maturity. It enables you to handle huge numbers of rule sets and insights so that you can design automatic action flows. Data science offers significant know-how in terms of tracking specific attacker behavior, which is critical when there are different sources and advanced, persistent threats.
This intelligence-led security response empowers a continuously adaptive, reflexive response that leans on quantified evidence when changing and updating rules and processes. This is the only way to deal with the increasingly sophisticated attacks we're seeing.
The screens went black: A real-life attack story
Gabriel talked about a real attack he experienced while working at Maersk. One day, about nine months after he joined, their screens went blank. Disconnecting and unplugging machines didn't help; it was already too late, and within minutes thousands of computers were rendered useless.
This was not a financially motivated attack, but a destructive one meant to bring Maersk to its knees. Gabriel and his team's only choice was to rebuild, as the attackers used one-way encryption. Obviously, while rebuilding the system, cybersecurity was a major priority. Dynamic analysis was considered paramount to their efforts, so that they could perform real-time analysis to empower ongoing learning and threat adaptation. Their goal was to learn what normal and abnormal internal behavior looked like, as 80% of attacks are internal.
Following the attack, Gabriel came up with four levels of observability, health checks, and a way to determine whether a system's health has been compromised. All processes and architecture decisions are now forced through a cybersecurity assessment and must pass a number of checks and balances. This doesn't mean that all the boxes need to be ticked for a new process or decision to be approved; the main point is to build knowledge of your gaps and weaknesses so that you can leverage the right capabilities and vendors for your security philosophy.
Over the last two years, we've seen a growing trend of organizations adopting dedicated API tools that help monitor, discover, and remediate shadow APIs to better understand their risks. This is a great development, as APIs are totally different from the application world we came from. The only way to protect APIs is to adopt unique tools and processes built specifically for them.
API security: Getting the board onboard
The proliferation and severity of cybersecurity attacks in our landscape are making the boards and executives of many enterprises take more interest in API protection. Increased visibility is another way to get execs to understand the risks they're exposed to. If you can find an easy way to show your execs how much unprotected data is at risk, you've won half the battle.
This visibility will, in turn, empower a more adaptive, reflexive cybersecurity posture that will enable you to continuously learn, draw insights and modify your posture in response to new types of attacks.
Developing a consistent, visible security posture across all of your enterprise assets is a central tenet of any robust cybersecurity strategy. This security posture must take into account the four pillars of the API lifecycle: Plan, Build, Operate and Retire. To do that correctly, you've got to choose the technologies that will enable you to enforce the policies, tools and governance that you decided upon when starting out on your API security journey.
Of no less importance is developing a holistic, centralized strategy that empowers the visibility you need to protect your assets. Advanced ML and Deep Learning technologies delivered by innovative companies like Imvision can definitely help you achieve that.