Healthcare will continue to be one of the top verticals at risk due to the value of sensitive data. Security hygiene is like your immune system: bad habits can lead to the breakdown of your immune system and greater susceptibility to viruses. Likewise, in cybersecurity, bad practices can lead to the breakdown of your security hygiene and greater susceptibility to data breaches.

Healthcare has become a highly targeted field because of the high value of protected health information (PHI). The Health Insurance Portability and Accountability Act (HIPAA) regulates data privacy for health information and mandates specific processes to best protect health data. Because of this, proper healthcare security hygiene practices are central – failure to implement them can lead to massive fines, loss of reputation and trust, and lawsuits from clients or patients.

Based on recent data breaches and levied fines, below are some of the most important healthcare security hygiene fails of 2019.

  1. System Misconfigurations and Vulnerabilities
  2. Failing to Encrypt Devices and Drives
  3. Unauthorized Users
  4. Compromised or Blank Passwords
  5. Storing Protected Data in Public Servers


In the previous parts of this article series, I mentioned that we are at the advent of a new era of glorious distributed computing of many kinds, and that infrastructure therefore has to be reimagined to meet evolving needs and challenges.

To close out the series, let us summarize the ways that infrastructure itself and applications are being decentralized, thereby driving needs to evolve infrastructure (compute, network, storage, security, and orchestration) to meet the resulting challenges.

Each of the above trends, described in the earlier parts of this article series, engenders a set of new challenges for application delivery, performance, and security. On top of those trends, we also have accelerating application functionality trends and end-user trends that are driving remote and distributed applications and data delivery.

Here are some of the application functionality trends to note:

  • Rich-media communication using real-time video conferencing is growing exponentially
  • Healthcare delivery is combining wellness and vitals monitoring with video conferencing to optimize outcomes
  • The Internet of Things is being combined with machine learning and AI for increasing levels of optimization, and even zero-touch automation
  • Automobiles are becoming increasingly super-connected entertainment and commerce centers on wheels, and are on the way to full self-driving, which will further increase utilization of those services

Here are some of the end-user trends to note:

  • Most application and content consumption increasingly happens on mobile devices
  • Such consumption increasingly takes place while the users themselves are moving (including in connected cars)
  • Ad-hoc, over-the-top entertainment consumption (streaming, games) is increasingly prevalent

Such application and end-user trends also interact with each other and with the three distributed trends we mentioned earlier (distributed apps, edge compute, and distributed trust) to compound the challenges on cloud infrastructure.

With all these challenges converging, we cannot stay on course for the incremental evolution of infrastructure. We need to radically re-imagine infrastructure to deliver innovative solutions to address the needs of responsive user experience, immersive application experiences, and data security and sovereignty delivered by a myriad of cloud and edge based microservices.

Given all of the above, I am very happy to report that we are, in fact, on the way to the delivery of such solutions. Companies co-created by The Fabric in the past have already delivered some of the solutions that are needed (e.g., SD-WAN, pioneered by VeloCloud), and more such solutions are on the way, spearheaded by recent company co-creations that we have announced.

Please visit this page of The Fabric portfolio to read more about how companies co-created by The Fabric are revolutionizing infrastructure by reimagining it for this dawning distributed era.

We will have more on this subject when we report on the outcome of the discussions amongst industry luminaries at The Fabric’s invite-only Annual Summit on October 17th in Palo Alto.

Watch this space!

Picture this: a hacker uses a VPN to break into a cloud server (a virtual machine hosted with a tier-1 public cloud provider) of a large financial enterprise through a misconfigured firewall, then executes a small set of commands (an injection attack) that gets her the credentials. She then uses those credentials to send authenticated requests to the cloud’s storage environment and extracts hundreds of millions of credit card applications and account records (a data exfiltration attack), carrying all kinds of sensitive data such as PII (personally identifiable information) and PI (personal information).

Sounds all too familiar and real? That’s because it is a real breach. Interestingly, it is a breach of an enterprise that is well known for taking data security very seriously.

What’s most alarming about the story is that the hacker went undetected (despite all the monitoring and detection tools in place); she had to brag about her exploits on GitHub and Slack for anyone to take notice, and only after a tip-off did the company realize that the breach had occurred and take remediation actions.

Now, the purpose of highlighting this application breach is not to malign any particular enterprise but to learn from it. All too often we make the mistake of assuming that our environment is safe and not vulnerable to hacks and breaches, when it is actually quite exposed and hacker-friendly. In fact, over the past few years we have seen numerous application breaches at credit reporting agencies, local search services, hotel chains, gaming companies, postal services, and more, and interestingly many of them were highly unsophisticated data breaches. These breaches obviously hurt not only the businesses, through regulatory fines ($700 million in one case) and the cost of damage control, but also the end customers, who have to deal with the aftermath of their sensitive data falling into the wrong hands.

Current State of Application (and API) Security

So why has application security become of such paramount importance now, more than ever before, and why are we seeing a spate of application data breaches in the recent past?

When you look at enterprise data, the crown jewel that everyone is trying to protect, there are various touch points, or ‘data access doors’ if you will, to it from various sources: employees who directly access data using removable media, end-point and IoT devices, cloud and SaaS workloads, legacy web apps, and now, increasingly, modern distributed applications. Everywhere, data is being accessed, handled, and, most importantly, transferred across heterogeneous and distributed environments.

And while there are multiple security solutions protecting the other so-called ‘data access doors’, when it comes to modern applications, and specifically to application data-in-motion (APIs being a small subset of that), there are no comprehensive security solutions that can monitor the environment and protect against bad actors (as a side note, we are of late seeing the emergence of a few perimeter-based API security solutions). And with applications evolving rapidly from monolithic to distributed, the number of APIs, or application data-in-motion interactions, has increased exponentially, making the security problem even more acute and urgent.


Given all this, it should come as no surprise that security experts believe “API is the next big cyber-attack vector”, or, put more generically, ‘application data-in-motion is the next big cyber-attack vector’.

Learnings, Best Practices and Recommendations

Let’s face reality: no environment is completely safe and foolproof. The least we can do is learn from others’ mistakes and better prepare ourselves to protect our environments against these kinds of breaches. Here are, in my opinion, the top five learnings from these recent data breaches, along with best practices and recommendations to keep in mind.

1. Manual configuration is prone to human error. Even the best of the best make mistakes when they have to configure devices manually. Case in point: the example data breach above, where the hacker leveraged a misconfiguration to penetrate the environment.

Best Practice / Recommendations:

It is always advisable to do away with manual configuration wherever possible. Oftentimes, solutions require policies to be configured manually, which creates huge overhead and risk, especially when the number of components is unmanageably large, as is true of cloud-native workloads and environments. So look for solutions that do not require admins to configure policies manually and that instead recommend which configurations and policies to put in place.
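As a complement to such solutions, even a small amount of automated policy linting beats manual review. The sketch below, in Python, checks a list of ingress rules for world-open access on sensitive ports; the rule format, port list, and function names are hypothetical, not any vendor's API.

```python
# Minimal sketch: lint firewall/security-group rules for risky entries
# instead of trusting manual review. The rule dictionary format here
# is hypothetical, for illustration only.

RISKY_PORTS = {22, 3389}  # SSH and RDP should rarely be world-reachable

def audit_rules(rules):
    """Return a list of findings for overly permissive ingress rules."""
    findings = []
    for rule in rules:
        if rule.get("direction") != "ingress":
            continue
        world_open = rule.get("source") == "0.0.0.0/0"
        if world_open and rule.get("port") in RISKY_PORTS:
            findings.append(f"port {rule['port']} open to the entire internet")
        if world_open and rule.get("port") == "*":
            findings.append("all ports open to the entire internet")
    return findings

rules = [
    {"direction": "ingress", "source": "10.0.0.0/8", "port": 443},
    {"direction": "ingress", "source": "0.0.0.0/0", "port": 22},
]
print(audit_rules(rules))  # ['port 22 open to the entire internet']
```

A real system would run checks like this continuously against the live configuration, rather than once at deploy time.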

2. Perimeter breaches are inevitable. Whether through a misconfigured firewall, an issue in the API server, or a vulnerability in the infrastructure (e.g. Kubernetes CVE-2018-1002105), perimeter (aka north-south) breaches are bound to happen, and when one does, how well you have protected your internal environment determines whether your data is breached.

Best Practice / Recommendation:

Investing in security solutions that focus on east-west and insider attacks, in addition to north-south, is therefore a must. Ensure that your security solutions offer distributed security and policies, and that each workload granularly secures itself in a zero-trust manner. Oftentimes, though, ‘zero-trust’ is confused with mere encryption (mutual TLS). What we need to remember is that ‘encrypted’ does not mean ‘secured’. Although encryption raises the bar, hackers will use the encrypted path as the transport to breach applications and data. So it is strongly recommended to invest in security solutions that are not only distributed but also operate deep within the data layer (as opposed to just the network or URL layer).
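To make the mutual-TLS point concrete, here is a minimal sketch using Python's standard `ssl` module: requiring client certificates is what turns plain TLS into workload-to-workload authentication. The helper function name is my own; note that, per the point above, this secures the channel but does nothing to inspect the data flowing over it.

```python
import ssl

def make_mtls_server_context(certfile=None, keyfile=None, cafile=None):
    """Build a server-side TLS context that *requires* client certificates."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # CERT_REQUIRED is what makes this *mutual* TLS: clients that cannot
    # present a certificate signed by our CA are rejected at handshake time.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)   # this workload's identity
    if cafile:
        ctx.load_verify_locations(cafile)        # CA used to vet peers
    return ctx
```

Even with this in place, an attacker holding stolen credentials rides the same encrypted path, which is why data-layer inspection is still needed on top.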

3. Modern distributed applications (and APIs) offer a path of least resistance for hackers. As a result, not only are applications and API breaches on the rise, but many of the attacks are also quite simple and unsophisticated.

Best Practice / Recommendation:

Whether it is your public APIs to partners, your distributed east-west APIs, or even your egress APIs to third-party vendors, it is strongly recommended to have a comprehensive API security (or, more generically, data-in-motion security) strategy in place.
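One small element of such a strategy, often overlooked on the egress side, is simply refusing outbound API calls to hosts that are not explicitly approved. A minimal sketch, with a hypothetical allowlist:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of approved third-party API hosts.
APPROVED_EGRESS_HOSTS = {"api.payments.example.com", "api.kyc.example.com"}

def egress_allowed(url):
    """Check an outbound API call against the approved-host allowlist."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_EGRESS_HOSTS

print(egress_allowed("https://api.payments.example.com/v1/charge"))   # True
print(egress_allowed("https://attacker-controlled.example.net/drop"))  # False
```

An allowlist like this would not have stopped the example breach (the exfiltration used the cloud's own storage APIs), which is exactly why it is only one layer of a comprehensive strategy, not the strategy itself.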

4. Post-authentication hacks using stolen credentials are quite common. “A scan of billions of files from 13% of all GitHub public repositories over a period of six months has revealed that over 100,000 repos have leaked API tokens and cryptographic keys, with thousands of new repositories leaking new secrets on a daily basis.” Clearly, focusing on identity management alone is not enough in a world where such errors are made by novices and experts alike, giving hackers a vehicle to piggyback on authorized sessions and perform data breaches. In the example breach above, the hacker used stolen credentials to extract all the sensitive information from cloud storage.
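The scans cited above are, at their core, pattern matching over text. Here is a deliberately tiny Python sketch of the idea; real scanners use far larger rule sets and entropy checks. The patterns below cover only AWS-style access key IDs and PEM private-key headers, and the sample key is AWS's published documentation example, not a live secret.

```python
import re

# Two illustrative credential patterns; real secret scanners use
# hundreds of rules plus entropy heuristics.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return the names of the secret patterns found in a blob of text."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))

# AWS's well-known *example* key, used here as safe test input.
snippet = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # oops, committed to a public repo'
print(scan_text(snippet))  # ['aws_access_key_id']
```

Running such a scan as a pre-commit hook is a cheap way to keep credentials out of repositories in the first place.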

Best Practice / Recommendation:

While there is a lot of focus on identity and access management, several application attacks, such as parameter tampering, occur post-authentication using stolen credentials. It is therefore strongly recommended to invest in security solutions that address post-authentication and authorization breaches, especially those that can detect user account takeover using stolen credentials.
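Parameter tampering often amounts to an authenticated user swapping in someone else's object ID. The guard is an explicit post-authentication ownership check on every request, sketched below with a hypothetical data model:

```python
# Sketch: even after authentication succeeds, verify that the session's
# user actually owns the object named in the request. The ownership
# table and function are hypothetical, for illustration only.

ACCOUNT_OWNERS = {"acct-100": "alice", "acct-200": "bob"}

def authorize_account_access(session_user, requested_account):
    """Return True only if the authenticated user owns the requested account."""
    owner = ACCOUNT_OWNERS.get(requested_account)
    # Never trust the client-supplied account ID on its own: deny unless
    # the object's recorded owner matches the session identity.
    return owner == session_user

print(authorize_account_access("alice", "acct-100"))  # True: alice owns it
print(authorize_account_access("alice", "acct-200"))  # False: tampered ID
```

The key design point is that the check uses the server-side session identity, not anything the request body claims about who is asking.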

5. Real-time visibility and detection are key. According to IBM, it takes about 197 days (i.e., more than six months) on average to identify a breach. This will only get worse with modern applications that are distributed across clouds and environments. In the example breach above, despite all the monitoring tools in place, the hacker might well have gone undetected while stealing hundreds of millions of sensitive records, had she not bragged about it online, four months after she actually perpetrated the breach.

Best Practice / Recommendation:

While there should be an emphasis on protection, it is extremely important to first detect potential threats and breaches in real time. “You can’t protect what you can’t see”, goes the age-old saying in security. It is therefore strongly recommended to invest in visibility, discovery, and detection tools that first discover your distributed environment (especially the interactions of the assets in use) and then detect data leaks, attacks, and breaches on those asset interactions in real time.
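As a toy illustration of real-time detection, bulk exfiltration like the one in the example breach tends to look like a response-size spike far above a client's recent baseline. The sketch below flags such spikes; the class, window size, and threshold are my own simplifications, not how any particular product works.

```python
from collections import deque

class ExfiltrationDetector:
    """Flag a client whose response sizes spike far above its recent baseline."""

    def __init__(self, window=20, factor=10.0):
        self.window = window                  # recent requests kept as baseline
        self.factor = factor                  # spike threshold vs baseline mean
        self.history = deque(maxlen=window)

    def observe(self, bytes_out):
        """Record one response size; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) == self.window:  # only judge once baseline is full
            baseline = sum(self.history) / self.window
            anomalous = bytes_out > self.factor * baseline
        self.history.append(bytes_out)
        return anomalous

det = ExfiltrationDetector(window=5, factor=10.0)
for size in [1000, 1200, 900, 1100, 1000]:    # normal-looking API responses
    det.observe(size)
print(det.observe(500_000))  # True: ~480x the baseline, likely bulk pull
```

A production detector would track many more signals (destinations, request rates, data sensitivity), but the principle is the same: build a baseline of normal interactions, then alert on deviations as they happen rather than months later.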

Cloud and application technologies have evolved from monolithic to multi-tiered to microservices to serverless functions. At the same time, workloads have gotten smaller and become ephemeral, while the number of workloads and the number of interactions between them have grown exponentially. As a result, data as we know it has started residing increasingly in between workloads (i.e. in motion) rather than inside them (i.e. at rest or in use). At the same time, attacks have moved deeper into the data layer. Distributed, deep-data-layer, data-in-motion security is, therefore, the need of the hour for these applications.