Temporary Access To CloudSQL

CloudSQL Access Controls

Securing the development environment is a critical challenge for DevSecOps teams that must navigate multiple cloud environments and technologies. To improve collaboration between developers, security professionals, and IT operations staff, we need to provide secure access to networks and services—which often includes granting elevated levels of permissions for databases such as CloudSQL. Ultimately, you should come away with an understanding of how to securely grant developers increased privileges in their public cloud CloudSQL environments without sacrificing security posture or control.




Managing Permissions in CloudSQL

This blog post will explore how to efficiently manage secure, elevated permissions to CloudSQL, an enterprise database service offered on Google Cloud Platform. With Apono’s strategies, you can make sure that only those who need it have access to the right information, while minimizing both project overhead and organizational risk. Let’s dive in!




Using Apono To Provide Temporary Access to CloudSQL

Your first step is to create an Apono account; you can start your journey here.

Follow the steps at our CloudSQL Integration Guide.

Now that Apono is set up, you can start creating Dynamic Access Flows:

  • Automatic Approval Access Flows – Using admin-defined context and pre-defined roles to provide automatic access to CloudSQL resources.
  • Manual Approval Access Flows – Using admin-defined context and pre-defined roles to route access requests to CloudSQL resources through a designated approver.



Using Apono’s declarative access flow creator, you will be able to simply define:

  • Approvers
    • User Group (round-robin)
    • Single User
    • Automatic – Contextual
  • Requesters
    • User Group
    • Single User
  • Resource
    • Single Resource
    • Pre-Defined Resource Group
    • Partition of a resource
  • Duration
    • By Hours
    • By Days
    • Infinite
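Taken together, these fields amount to a single declarative flow definition. The sketch below is a toy Python model of that idea – the field names and values are illustrative assumptions, not Apono’s actual schema:

```python
# Illustrative only: a toy model of a declarative access flow.
# Field names and example values are assumptions, not Apono's real schema.
from dataclasses import dataclass, field

@dataclass
class AccessFlow:
    name: str
    approver: str                       # e.g. "automatic", "user:alice", "group:dba-oncall"
    requesters: list = field(default_factory=list)
    resource: str = ""                  # single resource, resource group, or partition
    duration_hours: float = 1.0         # a None here could stand in for "infinite"

    def describe(self) -> str:
        who = ", ".join(self.requesters)
        return (f"{self.name}: {who} -> {self.resource} "
                f"for {self.duration_hours}h (approver: {self.approver})")

# An automatic-approval flow for a hypothetical CloudSQL database:
flow = AccessFlow(
    name="cloudsql-read-prod",
    approver="automatic",
    requesters=["group:backend-devs"],
    resource="cloudsql:prod-instance/analytics-db",
    duration_hours=8,
)
print(flow.describe())
```

Swapping `approver="automatic"` for a user or group name would model the manual-approval variant of the same flow.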

Example: CloudSQL Automatic Approval Access Flow:

Example: CloudSQL Manual Approval Workflow:

Temporary Access To PostgreSQL

PostgreSQL Access Controls

PostgreSQL is a widely popular relational database management system. PostgreSQL authorization is an ongoing process that checks each command, comparing it with the user account’s role and its associated privileges.
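To see what this role-and-privilege model looks like in practice, here is a sketch of the SQL a DBA might run by hand to grant short-lived access – exactly the manual routine that temporary-access tooling replaces. `CREATE ROLE ... VALID UNTIL`, `GRANT`, and `REVOKE` are standard PostgreSQL; the role and table names are made up for the example:

```python
# Compose the statements for a manually managed, short-lived PostgreSQL role.
# Role and table names are hypothetical; VALID UNTIL caps the password's lifetime.
from datetime import datetime, timedelta, timezone

def temp_role_sql(role: str, table: str, hours: int) -> list[str]:
    expires = datetime.now(timezone.utc) + timedelta(hours=hours)
    return [
        f"CREATE ROLE {role} LOGIN VALID UNTIL '{expires:%Y-%m-%d %H:%M:%S+00}';",
        f"GRANT SELECT ON {table} TO {role};",
        # Cleanup a human must remember to run once the window closes:
        f"REVOKE SELECT ON {table} FROM {role};",
        f"DROP ROLE {role};",
    ]

for stmt in temp_role_sql("temp_dev", "orders", hours=4):
    print(stmt)
```

Note that `VALID UNTIL` only expires the password, not already-open sessions or the grants themselves – which is why the revoke/drop steps (and automating them) matter.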




Managing Permissions in PostgreSQL

In the era of DevSecOps, ease of access and secure management of resources are essential to facilitating collaboration among development teams. Providing developers with elevated access to PostgreSQL can be a critical step in speeding up product development cycles while maintaining necessary security protocols. For an organization with many users accessing different databases, granting individual user accounts exclusive privileges can be cumbersome and overwhelming. In this blog post, we will explore the best practices involved in setting up privileged PostgreSQL accounts for developers while protecting core assets from unauthorized or careless use.




Using Apono To Provide Temporary Access to PostgreSQL

Your first step is to create an Apono account; you can start your journey here.

Follow the steps at our PostgreSQL Integration Guide.

Now that Apono is set up, you can start creating Dynamic Access Flows:

  • Automatic Approval Access Flows – Using admin-defined context and pre-defined roles to provide automatic access to PostgreSQL resources.
  • Manual Approval Access Flows – Using admin-defined context and pre-defined roles to route access requests to PostgreSQL resources through a designated approver.



Using Apono’s declarative access flow creator, you will be able to simply define:

  • Approvers
    • User Group (round-robin)
    • Single User
    • Automatic – Contextual
  • Requesters
    • User Group
    • Single User
  • Resource
    • Single Resource
    • Pre-Defined Resource Group
    • Partition of a resource
  • Duration
    • By Hours
    • By Days
    • Infinite

Example: PostgreSQL Automatic Approval Access Flow:

Example: PostgreSQL Manual Approval Workflow:

Temporary Access To MySQL

Intro

MySQL is a widely popular relational database management system. MySQL authorization is an ongoing process that checks each command, comparing it with the user account’s role and its associated privileges.
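For comparison, here is the manual equivalent in MySQL: creating a user, granting it limited privileges, and scheduling a one-shot MySQL event to drop the account later. The statement forms are standard MySQL (the event scheduler must be enabled for the cleanup to fire); the user, database, and password placeholder are hypothetical:

```python
# Sketch: grant a MySQL user temporary access and schedule its removal with
# a one-shot MySQL event. Names are hypothetical; requires the event
# scheduler to be on (SET GLOBAL event_scheduler = ON) for cleanup to run.
def temp_mysql_access(user: str, db: str, hours: int) -> str:
    """Return the statements for a time-boxed MySQL account."""
    return "\n".join([
        f"CREATE USER '{user}'@'%' IDENTIFIED BY '<one-time-password>';",
        f"GRANT SELECT ON {db}.* TO '{user}'@'%';",
        # One-shot event that tears the account (and its grants) down:
        f"CREATE EVENT revoke_{user} "
        f"ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL {hours} HOUR "
        f"DO DROP USER '{user}'@'%';",
    ])

print(temp_mysql_access("temp_dev", "analytics", hours=4))
```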




MySQL Access Controls

For many DevOps professionals, managing secure access to the company’s databases is a challenging task. You need to manage user permissions and authentication, as well as inevitable requests for temporary access for staff members and third-party vendors. These requests create an additional burden on your team, but ensuring controlled access to MySQL can be a straightforward process if you know how to do it correctly. In this blog post, we’ll discuss best practices for granting temporary MySQL access in an efficient and secure manner using Apono. We’ll talk about why it’s important that temporary access is properly managed, offer guidelines on which users should receive temporary credentials, and consider how long those credentials should remain active.




Using Apono To Provide Temporary Access to MySQL

Your first step is to create an Apono account; you can start your journey here.

Follow the steps at our MySQL Integration Guide.

Now that Apono is set up, you can start creating Dynamic Access Flows:

  • Automatic Approval Access Flows – Using admin-defined context and pre-defined roles to provide automatic access to MySQL resources.
  • Manual Approval Access Flows – Using admin-defined context and pre-defined roles to route access requests to MySQL resources through a designated approver.



Using Apono’s declarative access flow creator, you will be able to simply define:

  • Approvers
    • User Group (round-robin)
    • Single User
    • Automatic – Contextual
  • Requesters
    • User Group
    • Single User
  • Resource
    • Single Resource
    • Pre-Defined Resource Group
    • Partition of a resource
  • Duration
    • By Hours
    • By Days
    • Infinite

Example: MySQL Automatic Approval Access Flow:

Example: MySQL Manual Approval Workflow:

How streamlining access leads to productive development teams


Does your access management hurt your team’s productivity? It does.

How do we know? Let’s look at the data.

Access and productivity in numbers

The average employee has 191 passwords to keep track of, and managing all those different usernames and passwords is a huge time suck. There’s no denying it: having to constantly remember a jumble of passwords is a productivity killer. A recent study found that the average employee spends over 10 hours per work year simply inputting passwords. Add to that the time required to reset forgotten passwords, and you’re looking at a serious drag on productivity: the estimated cost of lost productivity averages $5.2 million annually per organization.

But it’s not just the time spent managing passwords that hurts productivity—it’s also the time spent waiting for access to the systems and data your team needs to do their jobs. In fact, 66% of employees say they’ve wasted time at work waiting for someone to give them access to something. And roughly one-third of IT professionals say that restrictive access causes daily (31.8%) or weekly (32.3%) interruptions in their work.

These interruptions quickly snowball into missed deadlines and frustrated workers. 52% of development teams have missed deadlines due to a lack of access to the needed resources and infrastructure.

For example, imagine this common scenario: a developer needs to access a Kubernetes cluster to work on an application, but they can’t log in. Their manager, who normally provides access, is on PTO abroad with spotty reception. They have no choice but to send a request up the chain manually and hope for the best, which results in losing hours or even days on a project simply waiting for access to a resource.

If this sounds familiar, your company isn’t alone—in fact, 64% of businesses have experienced financial losses due to access management issues. Missed deadlines and extended projects often result from inefficiencies in access management.

And it’s not just the users who are affected: help desk employees spend nearly 30% of their time resetting passwords. That’s valuable time that could be spent on other tasks.

Access and security in numbers

A record number of data breaches in 2021—1,862 to be exact—cost companies an average of $4.24 million each. According to Verizon, 61% of all company data breaches and hacking incidents result from stolen or compromised passwords, and it’s not hard to see why.

When employees lack seamless access to systems, it not only affects productivity but also company security. 

Technical employees need access to do their job well, but they’re not always given that access. To do their jobs, technical teams often find creative ways around the access roadblocks by resorting to methods such as password reuse, shadow IT, sharing credentials, or keeping backdoor access. In other words, when employees can’t get the access they need to do their job, they find ways to get it themselves—even if that means going around company policy.

These workarounds might help the technical team finish their tasks and knock down some Jira tickets in their queue, but they also expose the company to security risks. A recent study found that 8 out of 10 hacking incidents were due to shared passwords, and, even more alarmingly, 53% of employees have admitted to sharing their credentials.

Passwords are proliferating across our digital world and getting stolen in record numbers every year. Consequently, it’s no mystery that over 555 million passwords are accessible on the Dark Web, leading to credential-stuffing attacks that account for a majority of data breaches in recent years.

Streamlined access is key to both productivity and security

The bottom line is this: if you want to improve productivity and security, you need to give your technical teams the seamless access they need to do their jobs.

Now that we’ve established that access is key to productivity and security, let’s look at how you can streamline access and get your team back on track.

That’s where Apono comes in. 

Apono.io is an innovative identity and access management solution that gives your technical teams the access they need — without sacrificing security.

Apono streamlines access by automating the process of granting and revoking permissions, so you never have to worry about manually managing access again. Our technology discovers and eliminates standing privileges using contextual, dynamic access automation that enforces Just-In-Time and Just-Enough Access.

Streamlining access also makes it easier to meet auditing and compliance requirements. With Apono, you can see who has access to what, when they accessed it, and from where.

With Apono, it is now possible to seamlessly and securely manage permissions and comply with regulations while providing a frictionless end-user experience. Plus, Apono integrates with popular applications like Jira, Slack, and Google Workspace, so you can manage access from one central location.

With Apono.io, you can:

  • Automatically grant and revoke access for a seamless user experience
  • Enforce least privilege and separation of duties for better security
  • Monitor user activity and ensure compliance

And much more!

It’s simple to use and easy to set up, so you’ll be up and running in no time. Stop wasting time on access issues and start improving productivity—and security—with Apono.io today.

DevOps Expert Talks: Ask Me Anything With Moshe Belostotsky

In this Q&A session with Moshe Belostotsky, Director of DevOps at Tomorrow.io, we dive into the evolving role of DevOps and how security considerations are changing the way software is built and delivered.

Q: First of all, if you can tell me a little about yourself, what brought you into DevOps?

A: “I was in the world of DevOps even before it was called DevOps and before the Cloud became a thing. Ever since I can remember, I have been doing automation, CI/CD, treading this line between infrastructure automation and enablement, automatic tests, and later on, the Cloud.

I started working with automation at the age of 16 and have been doing it ever since, with a 4-year break during my army service, after which I jumped right back in.”

Q: What do you like the most about working with DevOps automation?

A: “A couple of things.

First, it’s the variety of work: the number of touch points with the platform, with the different teams, and sometimes with the customers. At its core, DevOps is about collaboration. It’s about breaking down silos between development and operations teams so that they can work together to deliver software faster and more efficiently.

Second, it’s never-ending problem-solving. You are always looking for ways to optimize processes, increase the velocity and optimize the way developers work. It’s also about the efficiency and stability of the production environment.

In a way, being in DevOps allows you to see a bird’s-eye view of the entire system. What makes this role very interesting is not being limited to a single domain.”

Q: What advice can you give to somebody who is just starting in DevOps?

A: “As an autodidact, I can say that the first thing to know is that you don’t know anything. That’s the baseline and the starting point. And the second thing to realize is that you can solve anything.

Once you know that everything is solvable, and that you don’t need to panic when you don’t know how to solve something because you can always gather new knowledge, you can start enjoying the process of problem-solving and optimizing.

And last but not least, understanding the developers and how they think, and how we can add value by translating the infrastructure to them.”

Q: What do you think makes a good DevOps engineer?

A: “A person with a can-do attitude, a people person, who is always learning, and a problem-solver.

Someone who has that basic understanding that everything is solvable and that we should not take for granted anything at all. Someone who strives to help the developers and understands that we’re here to communicate with people and solve their problems, not just communicate with the computers.”

Q: As a director of DevOps, what are your priorities?

A: “MTTR, MTBF, and MTTV – mean time to recovery, mean time between failures, and mean time-to-value. Those are the measurable KPIs to focus on. And, of course, cost efficiency.”

Q:  As a DevOps leader in your organization, what role does security play in the decisions you make?

A: “Very important. I work closely with the security team.

Collaboration is key. As DevOps, we need to create a single language with the security office. Because eventually, we aim for a single goal – for the company to be successful, to grow, and to avoid security incidents, especially public incidents. These will undeniably be very bad for the company and very bad professionally for all involved. We never want to be in this situation.

But also an important part of DevOps is the developer experience. So when we apply security measures and security restrictions on production environments, we still need to maintain the mean time-to-value KPIs.

So when the developers can’t do their work or have to go a much longer road when trying to achieve their goals, we hurt the company, although we increase security.

If the developer cannot view his environment in production, cannot access those environments, and doesn’t have any break-glass protocol for the production environment, then we hurt mean time-to-value and mean time to recovery, which will eventually hurt the company. Our security may be great, but we will be out of business.

So balancing developer experience with security is something we constantly have to focus on as DevOps.”

Q: As DevOps, what’s the worst ask you get from other teams?

A: “Friday afternoon, a developer decides that he has some spare time to develop. He encounters an issue and starts sending messages in the DevOps channel.

The channel has two functions; we use it both for standard requests and for urgent requests. So we are always monitoring those channels. And those requests, especially when they’re ambiguous but turn out to be non-urgent, should really wait till Monday morning.”

Q: How can organizations assist their DevOps engineers to be more successful in their jobs?

A: “First is creating the space for and facilitating learning. Most DevOps teams are understaffed, and we don’t have time for learning, upskilling, going to meetups, and taking courses. Learning needs to be made part of the job description.”

Q: What would you think is the next big change in DevOps?

A: “I think the two trends to be aware of are serverless and shift-left.

DevOps will require more and more coding skills. We will need to do more and more coding and less infrastructure maintenance. That is why we in DevOps always need to learn and adapt.”

Moshe Belostotsky is the Director of DevOps at Tomorrow.io. With nearly two decades of experience in the field, Moshe is one of the leading minds in the startup nation’s DevOps community. Having worked with companies such as Cisco, Hewlett-Packard, and Fiverr, Moshe has a wealth of experience in the field. In his current role at Tomorrow.io, he is responsible for managing the entire DevOps department and ensuring that the company’s products are released on time and meet customer expectations. In addition to his role at Tomorrow.io, Moshe is an in-demand thought leader in the DevOps community and frequently speaks at industry events.

The Uber Hack – Advanced Persistent Teenager Threat

Uber, the ride-hailing giant, confirmed a major system breach that allowed a hacker access to vSphere, Google Workspace, AWS, and much more, all with full admin rights.

In what will be remembered as one of the most embarrassing hacks in recorded history, the hacker posted screenshots from the consoles of the hacked platforms to the vx-underground Twitter handle as proof, including internal financial data and screenshots of the SentinelOne dashboard.

If you are going to hack a role, choose “Incident response team member” for optimal results

Before we dive into the “how,” let’s first explore the role of an Incident Response (IR) team member. When an incident (hack, production failure…) occurs, the incident response team members are the company’s “first responders”; when there is a fire, they are the “firefighters.” Due to the importance of their job, IR teams get an unprecedented level of access, usually while they are on-call – making them an ideal target.

Zero Trust vs “Uber” Trust 

The hacker, who either targeted the IR team or stumbled upon it by chance, was able to socially engineer a member of the IR team using a technique known as “MFA Fatigue” – bombarding the user with second-factor approval requests. The hacker proceeded to contact the IR team member via WhatsApp, posing as an Uber IT support rep, and claimed that the flood of requests would end as soon as the IR team member approved the login.

Following the approval, the hacker was able to enroll his own device as a second factor, giving him everything needed to log in to the rest of the applications in the environment.
  

Once the hacker was able to access Uber’s internal network, he located a shared folder containing a PowerShell script with admin credentials and used them to obtain access to the company’s Privileged Access Management (PAM) platform, thus gaining access to Uber’s entire network.

While the principles of “Zero Trust” call for reducing the attack surface by segregating networks, apps, and access, Uber’s architecture provided the user with “Uber” rights.

The hacker path:

Social Engineering => Duo MFA => 2nd-factor approval => Added device as 2nd factor => VPN access => Viewed internal network => PowerShell script with privileged credentials => Access to PAM => GOD_MODE

Centralized Authentication

The fault behind this ordeal is an inherent one, present in every centralized authentication scheme. Let’s elaborate a bit: back in the day, we used to manage credentials per application or data repository. A breached identity’s “attack surface” applied only to a single application – a 1:1 attribution between identity and action – but we had to manage a lot of credentials.

This method was decentralized by nature and, from a scalability standpoint, an impossible task.


To circumvent the scalability issue, we created Identity Providers (IDPs): a centralized approach that enabled us to share an identity across applications using a single set of credentials, increasing our potential attack surface across the organization. Understanding this risk, we added authentication factors, each with its own flaws.

The tradeoff between decentralized authentication and IDPs led to better scalability and user experience and answered operational needs, but it also led to hacks just like the Uber hack. Centralized authentication means that once you are in, you are in – have fun!

What can we do? 

But Uber did have Duo MFA! And SentinelOne! So what did they do wrong?

Nothing, really; they followed industry standards. The problem is that these standards will not protect you once a hacker is in.

Decentralized authentication is not coming back, nor should it! But what if we took a different approach and decoupled authorization from authentication by using dynamic access policies? Instead of just adding authentication factors, we can add authorization factors according to risk.

This model (shown above) treats authorization as a dynamic factor that correlates with the “risk circle,” adding authorization factors that provide an extra level of assurance, both verifying the user and preventing human errors caused by standing privileges.

At Apono, we solved this issue by enabling users to bundle permissions and associate them with users, creating Dynamic Access Flows that connect the risk circles above and add Multi-Factor Authorization to the access policy.

Each circle of risk is represented as a permission bundle; for each, the admin can create a policy that combines different authorization factors and a time frame for the access:

  • User Justification – The user must provide a reason for needing access.
  • User + Admin Justification – When an access request is created, both the requester and the admin must provide a reason to the approver.
  • Owner Approval – Access is granted when the owner of the group of permissions approves it.
  • IDP Owner Approval – Access is granted only when the IDP owner approves the request.
  • Restricted Timely Access – Access is open only for a defined period of time and is automatically revoked when that period ends.
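The “Restricted Timely Access” factor above boils down to a grant that carries an expiry and is re-checked on every use. A minimal Python sketch of the concept (not Apono’s implementation; the names are illustrative):

```python
# Toy model of restricted timely access: a grant carries an expiry,
# and every permission check re-validates it. Names are illustrative.
from datetime import datetime, timedelta, timezone

class TimedGrant:
    """A grant that is only valid until its expiry timestamp."""
    def __init__(self, user: str, resource: str, hours: float):
        self.user = user
        self.resource = resource
        self.expires_at = datetime.now(timezone.utc) + timedelta(hours=hours)

    def is_active(self, now=None) -> bool:
        # Re-checked on every use: past the window means implicitly revoked.
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

grant = TimedGrant("alice", "prod-db", hours=2)
print(grant.is_active())                                 # inside the window
print(grant.is_active(grant.expires_at + timedelta(1)))  # after expiry
```

The key design point is that expiry is enforced at check time rather than by a cleanup job someone has to remember to run.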

With Apono, you will be able to create Declarative Access Policies, defining authorization factors using our declarative access flow wizard.

Using Apono’s Declarative Access Flow Wizard, you will be able to create access flows with an array of authorization factors, approver groups, and two-step human authorization.

Effective Privilege Management in the Cloud – Mission Impossible?

TLDR: Overprivileged access is a natural consequence of manually granting and revoking access to cloud assets and environments. What DevOps teams need are tools to automate the process. Apono automatically discovers cloud resources and their standing privileges, centralizing all cloud access in a single platform so you don’t have to deal with another access ticket ever again.

How much access to cloud resources do your developers really need?

In an ideal world, you would give access to whoever needs it just for the time they need it, and “Least Privilege” (meaning both “Just-in-Time” and “Just Enough”) access policies would be the norm.

But we don’t live in an ideal world.

Cloud infrastructure is dynamic and constantly changing. Some resources, such as cloud data sets, may include more than one database, each with its own set of access requirements. For example, a user could require read/write rights for one and read-only rights for another.

In theory, you should keep track of all these access rights and revoke and grant them as needed. But in practice, we don’t have the tools to automate cloud access management, which leads us to give more access than we should.

What is overprivileged access?

Overprivileged access is when an identity is granted more privileges than the user needs to do their job. In the cloud, this happens all the time.

For example, a developer needs access to an S3 bucket for a couple of hours each Monday in June to do some testing. After they are done, they won’t need that access again until a sprint with a task requiring it comes up.

If you were to go by the book, you would need to manually give them access and then manually revoke it on Mondays for four weeks.
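To put a number on that toil, the snippet below counts the grant/revoke actions such a by-the-book routine would require for the Mondays of a sample month (June 2023), using only the standard library:

```python
# Count the manual grant/revoke actions for the S3-on-Mondays scenario.
import calendar

def monday_windows(year: int, month: int):
    """All Mondays in a month, each needing one grant and one revoke."""
    cal = calendar.Calendar()
    return [d for d in cal.itermonthdates(year, month)
            if d.month == month and d.weekday() == calendar.MONDAY]

mondays = monday_windows(2023, 6)
print(len(mondays))        # Mondays in June 2023
print(2 * len(mondays))    # manual actions: one grant + one revoke each
```

Four access windows means eight manual interventions for a single developer and a single bucket; multiply that across a team and a cloud estate, and the case for automation makes itself.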

This is simply not sustainable. The ratio between DevOps and engineers is already 1 to 10, and it’s not possible for DevOps engineers to be constantly dropping what they are doing to provision or revoke access. We’ve got other stuff to do.

When a developer needs access to a sensitive S3 bucket that contains customer data, it’s often not clearly defined which permissions will be enough for the user to do their job. We address this problem by providing more access than we should in order to avoid becoming a bottleneck. As a result, the whole role gets overprivileged permissions. What we’re left with is Overprivileged Role-Level Access that affects a large number of users and is not likely to ever be revoked.

Another common way overprivileged access creeps into your cloud stack is when “Read/Write” access is granted to users who need only Read rights for a limited time. An overprivileged identity with Write access can do great damage if it’s compromised.

To make matters worse, managing access is kind of dull. Nothing is less exciting than dealing with another access ticket. Managing access is the task you want to get over with as quickly as possible.

Without automation, it’s impossible to implement granular access provisioning, revoke access in a timely manner, or even just keep tabs on existing policies. And that, folks, is how overprivileged access to cloud resources became the norm.

Why overprivileged access is a problem

Today, overprivileged access is everywhere. And it’s a serious problem for several reasons:

1) Attack Surface = Permissions x Sensitive Cloud Resources

Overprivileged access is one of the biggest security risks in the cloud. In recent years, the vast majority of breaches (81%) have been directly related to passwords that were either stolen or too weak.

But it’s not just about passwords. It’s about the way cloud resources are accessed and used.

Overprivileged access significantly increases the blast radius of an attack. When an attacker obtains a set of valid credentials, the permissions linked to those credentials determine what they can and cannot do in your environment. The more permissions a compromised identity has, the bigger the attack surface.

In the cloud era, permissions are your last line of defense: the right permissions are what prevent unauthorized identities from accessing your company’s sensitive data. Therefore, tailoring access to the task at hand will drastically reduce the risk.

2) Complexity & Lack of Visibility

Another issue with overprivileged access is that it makes cloud environments more complex than they need to be. When everyone has full access to everything, it’s very difficult to keep track of what’s going on.

This can make it hard to troubleshoot issues, diagnose problems, and comply with regulations.

The harm that can come from overprivileged access does not come just from malicious actors. All humans make mistakes, and your employees are human.

3) Mistakes will happen

According to the 2022 Data Breach Investigations Report, human error is to blame for eight out of 10 data breaches. Overprivileged access significantly increases the risk of such mistakes and the resulting fallout.

The burden of access management falls onto DevOps teams

Traditionally, access management has been the domain of IT security, but as cloud adoption increased, the burden of managing cloud access has fallen upon the shoulders of those responsible for the cloud infrastructure.

More and more DevOps engineers are finding themselves in charge of their organizations’ access management policies.

In today’s public cloud reality, provisioning of access is becoming an ever more important part of DevOps engineers’ day-to-day work. And that’s where the balancing act begins:

  • You want to give developers the freedom to work on whatever they need to get the job done.
  • You know that overprivileged access is a dangerous thing, but you can’t spend every hour of every day stopping what you are doing to give and then revoke access to cloud resources.

A cloud-native approach to access provisioning

Moving to the cloud is a transition towards a more agile way of working, which necessitates a subsequent shift to dynamic permission management.

So what are we to do?

The answer, as with most things in a DevOps engineer’s life, lies in automation. We need to find a way to automate cloud access management so that DevOps engineers can focus on their actual jobs and not spend all their time managing access.

We need a tool that is:

– Easy to use

– Scalable

– Seamless

And that is where Apono comes in.

Apono simplifies cloud access management. Our technology discovers and eliminates standing privileges using contextual, dynamic access automation that enforces Just-In-Time and Just-Enough Access.

With Apono, it is now possible to seamlessly and securely manage permissions and comply with regulations while providing a frictionless end-user experience.

Are you ready to never have to worry about cloud access provisioning again? Get in touch with us today.

What we can learn from the LastPass hack

LastPass, a password manager with over 33M users, reported that an unauthorized party hacked into its development environment. The hackers were able to gain access through a single breached developer account.

Don’t act all surprised, getting hacked is a “WHEN” not an “IF” question 

Everyone gets hacked eventually; the bigger a company is, the bigger the target sign on its back. But LastPass is no ordinary company: consider the risk entailed in a service that generates and stores passwords. It is a “Key Master,” which means that if customers’ passwords were compromised, the attack surface trickles down, potentially affecting customers and their customers/users.

LastPass reported that the breach did not reach any customer data; only the company’s source code was taken. LastPass users, rejoice! But it did take the company two weeks to confirm that was the case, which sounds like a long time to evaluate the effect of a hack. It’s actually not, as Allan Liska, an analyst for Recorded Future, commented to Bloomberg:

“While two weeks might seem like a long time to some, it can take a while for incident response teams to fully assess and report on a situation,” he said. “It will take time to fully determine the extent of any damage that may have resulted from the breach. However, for now it appears to not be client-impacting.”

“We got hacked!” Knowing is half the battle; knowing what got hacked is the other half

Some of you might ask, “Why does it take two weeks?” A valid question – a lot can happen in two weeks. Just imagine what a hacker can do with two weeks of unrestricted access, or what you could do with two weeks off. The reason it took two weeks is simple: understanding the attack surface of the hack requires knowing the permissions the breached identity had – permissions that were in the hands of the attacker;

or in other words:

Breached User’s Permissions = Potential Attack Surface

The potential attack surface of a company that stores passwords is enormous. Without an up-to-date mapping of the databases and applications the breached developer’s identity could access, an incident response team had to investigate and assess the blast radius of the attack from scratch.

Attack Surface vs. Blast Radius

“Attack Surface” represents the potential impact of an attack, meaning the total number of services, databases, and applications a breached identity had access to.

“Blast Radius” represents the total impact of the security event, or in other words, the actions actually taken and the data actually accessed by the breached user.

To understand a security event’s attack surface, we need to know which permissions the breached identity had. To understand the blast radius, we need to check each action the attacker could have made and investigate the implications of those actions.
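To make the distinction concrete, here is a minimal Python sketch. All identity names, resources, and log entries are invented for illustration: the attack surface is derived from the permission mapping (what the identity could reach), while the blast radius is derived from the audit log (what it actually touched).

```python
# Hypothetical illustration of attack surface vs. blast radius.
# "permissions" maps each identity to every resource it can reach;
# "audit_log" records actions the identity actually performed.

permissions = {
    "dev-account": {"source-repo", "build-server", "customer-db", "secrets-vault"},
}

audit_log = [
    {"identity": "dev-account", "resource": "source-repo", "action": "git clone"},
    {"identity": "dev-account", "resource": "build-server", "action": "ssh login"},
]

def attack_surface(identity):
    """Everything the breached identity COULD have touched."""
    return permissions.get(identity, set())

def blast_radius(identity):
    """Everything the breached identity actually DID touch, per the logs."""
    return {entry["resource"] for entry in audit_log if entry["identity"] == identity}

# The blast radius is always a subset of the attack surface:
print(sorted(attack_surface("dev-account")))
print(sorted(blast_radius("dev-account")))
```

Without the permission mapping and the audit log, an incident response team has to reconstruct both sets by hand, which is exactly why an investigation can take weeks.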

So why does it take two weeks? Because we usually do not revoke access when users are done with it, we grant excessive privileges regardless of the task at hand, and we lack a proper way to monitor cloud access, all of which makes investigating the blast radius a tedious task.

Have no fear, Apono is here

We created Apono to solve this exact scenario. Apono’s centralized cloud access management solution takes a “Least Privilege by Default” approach: it maps cloud access, policies, and resources, attributes them to users, and then suggests how to convert them into “Just-in-Time” and “Just Enough” dynamic policies. Users get a granular set of resources for exactly the time they need it, and then access is automatically revoked. Apono records the entire approval timeline, so you always know who accessed what, when, and who approved it.

Apono drastically cuts down the attack surface by ensuring granular, time-bound access, and our access activity monitoring ensures no standing privileges are jeopardizing your organization. Our 1:1 access attribution capabilities make investigating the blast radius of an attack a breeze.


How we passed our SOC2 compliance certification in just 6 weeks with Apono

We recently went through the SOC2 process and are happy to report that we successfully passed our audit! Generating a SOC 2 Type 1 Report generally takes up to six months. In our case, the entire process took only 6 weeks, and we wanted to share how we did it.

TLDR: We used Apono’s cloud-native privileged access management solution to streamline our access review process and make the SOC2 audit much easier for us (and our auditor).

Our SOC2 journey

If you serve customers in regulated industries such as healthcare, finance, or the public sector, you will likely need to obtain SOC2 certification at some point.

For those who don’t know, SOC2 is the gold standard for security certifications. It is becoming increasingly common for SaaS companies to get SOC2 certified to reassure customers that all the necessary controls are in place to protect their data.

SOC2 reports measure a company’s security through the lens of AICPA’s Trust Services Criteria across five major categories:

  • Security – How effectively do you protect critical systems against unauthorized access?
  • Availability – How do you facilitate customer access to systems, including business continuity measures during and after an attack?
  • Processing Integrity – How do you upkeep all promised services’ functionality, including timeliness, accuracy, completeness, and integrity of authorization protocols?
  • Confidentiality – How do you safeguard all information classified as protected?
  • Privacy – How do you safeguard all personal information and personally identifiable information (PII)?

The SOC2 compliance report is a public attestation that your systems and controls have been assessed by an independent auditing firm and that they meet or exceed the standards for security, availability, processing integrity, confidentiality, and privacy.

The SOC2 certification process is notoriously long and arduous, but we are happy to report that we obtained our SOC2 certification in just six weeks from start to finish.

Apono helped us in two ways:

  • Generating an access review in a matter of seconds
  • Providing auditors with a live view of access to our production environment

Meeting SOC2 security requirement

SOC2 compliance covers a lot of ground and involves solidifying company policies, including access to sensitive resources covering both physical and digital access control.

We are cloud-native, so physical protections around data centers don’t apply to us. Access to digital resources is another matter. The problem with cloud resources is that you don’t hack in; you log in. That’s why access control is such an important part of SOC2.

SOC2 Access Control Requirements

SOC2 has several controls around access. Auditors will want to see that you have strong controls around:

  • Who has access to what
  • What they can do with that access
  • How you monitor and restrict access
  • How you uphold the Least Privilege principle
  • How you enforce separation of duties and roles
  • How you handle employee onboarding and offboarding

To meet these requirements, you’ll need to generate an access review report that includes:

  • A list of all users and their roles
  • A list of all systems and applications that each user has access to
  • What each user can do with that access (e.g., read-only, write, execute, etc.)
  • Procedures for granting and revoking access

The access review report is one of the most time-consuming and tedious parts of the SOC2 process. It involves manually reviewing Access Control Lists (ACLs) and then comparing them to lists of employees and their job descriptions to see if there are any discrepancies.

Sifting through all of that data is a huge pain, but we were able to generate an access review report in just a few seconds. Apono’s platform automatically and continuously maps out user roles and permissions across all systems and applications. So it was effortless to generate a report that includes all of the information required by SOC2.

Not only did this save us a ton of time, but it also ensured that our access review report was 100% accurate.

Moreover, we could automatically generate an access review report anytime we needed it during the certification process. This was incredibly useful because it meant we could easily re-run the report to reflect any changes in personnel or systems.

This huge time-saver allowed us to focus on other aspects of SOC2 compliance. Going forward, we can easily run the report anytime on demand if there are concerns about potential unauthorized access.

Our auditor was impressed with how quickly we could supply the access information they needed.

Access to production environment: live view

It’s not enough to have controls in place – you also need to be able to monitor and audit access on an ongoing basis.

Auditors will want evidence that you’re regularly reviewing and revoking access.

This is important for two reasons:

  • To make sure that the controls are being followed
  • To be able to detect and investigate misuse of data or systems

Auditors will want to access logs to see who did what, when they did it, and from where. We could provide them with something better: a live view of access to our production environment that they could monitor in real time.

This gave them visibility into our entire system and allowed them to see exactly who had access to what resources and what they were doing with that access. We were able to give our auditor a real-time view of who was logged in, what they were doing, and from where. This provided valuable insights and evidence that our access controls were working as intended. This was a huge selling point for our auditor.

Overall, Apono was an invaluable tool for streamlining our SOC2 compliance process. 

But it’s not just about passing the SOC2 compliance certification in record time (although that is a huge plus!). It’s about handling your cloud access in a way that’s secure, efficient, and scalable for the long haul. So if you’re looking for a platform for managing access control and compliance in the cloud, book a demo with Apono today. We’d be happy to show you how our platform can help you become secure and compliant while maintaining your productivity and agility.

Top 5 AWS Permissions Management Traps DevOps Leaders Must Avoid

As born-in-the-cloud organizations grow, natively managed Identity and Access Management (IAM) tools are becoming a growing concern. Although DevOps teams tend to bear the burden of cloud IAM provisioning, the operational challenges transcend functional silos. Even when SREs and infrastructure teams are closely aligned with security leaders, using native IAM tools to provision access with granular control is unsustainable. No one would contest the need for authorized personnel to get “Just Enough” access whenever they need it, “Just in Time” (aka JIT). Still, teams managing cloud-first deployments struggle to deliver effective access control at scale. While regulatory compliance requirements can act as a trigger for business continuity enablement, many companies are carrying unacceptable levels of risk in the form of “cloud IAM debt”. The following list of cloud permissions management traps may sound familiar to DevOps leaders. Avoiding them is trickier than you might think!

  1. Attempting to solve permissions management as an engineering challenge.
    In a perfect world, any authorized stakeholder could access just enough cloud resources to get the job done “just in time”. In practice, cloud Identity and Access Management  (IAM) policy configurations are not only complex, but a dynamic work in progress. When DevOps teams do attempt to provision just the right mix of AWS IAM configuration accounting for policy types, permission boundaries, and ACLs, the resulting homegrown solution rarely scales over time. Although DevOps and SRE teams own cloud IAM provisioning, risk management considerations define InfoSec governance.  Without clearly defined processes to determine how data governance guardrails can support IAM provisioning, such homegrown solutions cannot address the business challenge. 
  2. Letting compliance data governance requirements define IAM management
    To support smooth operations, most DevOps teams tend to over-provision as a matter of course. As the business matures, this approach does not support risk management considerations (e.g. privileged access to and governance of regulated or otherwise sensitive customer data). Once compliance requirements enter the mix, productivity inevitably suffers.  Without dedicated security controls to address usage attribution, reviews, and approval processes, DevOps teams tend to lose control. 
  3. Ignoring the need for an enterprise-wide user provisioning workflow
    The reality of JIT access requirements tends to be more dynamic than anyone can anticipate. The solution must therefore address the challenge holistically, beyond the scope of any single functional team (SREs and DevOps vs. infrastructure teams or InfoSec). Although addressing standard ad-hoc scenarios such as on-call personnel or “break glass” access is certainly a good start, a more thorough analysis tends to uncover multiple use cases to address. Some situations will require a human approver’s mediation, especially when granting access to PII data assets when absolutely necessary. Time-sensitive access scenarios such as “on-call” shifts are good candidates for unmediated automation.
  4. Neglecting the impact of infrastructure teams
    When ongoing IAM provisioning policies do not address JIT access requirements, support ticket fatigue can overwhelm cloud infrastructure teams. As organizations increasingly rely on manual processes, it is imperative to identify opportunities to reduce the backlog. Even a simple requirement to enable CLI access while supporting SSO connectivity can linger for long periods of time. Although tagging conventions can help to address the bigger picture, a lack of collaborative planning across functional silos often prevents effective implementation of holistic, enterprise-wide solutions.
  5. Tolerating standing privileges as a necessary evil
    Security teams are well aware of the benefits of enforcing a zero standing privilege (ZSP) operational model, which eliminates “always on” access and therefore reduces the attack surface dramatically. This straightforward goal is tricky to achieve beyond the scope of security. Established DevOps success metrics and related priorities rarely address the discovery of standing privileges, let alone a structured operational model to eliminate them entirely. As a result, organizations have come to terms with standing privileges as an unavoidable security blind spot. Interestingly, the benefits of usage monitoring and attribution of identities to resources transcend risk management considerations. By adopting a “shift left” approach to IAM provisioning, DevOps teams are discovering new opportunities to improve success metrics such as mean time to repair (MTTR).
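The zero-standing-privilege idea in trap 5 boils down to one invariant: every grant carries an expiry, so nothing is “always on”. Here is a toy Python sketch of that model; the user and resource names, the in-memory store, and the durations are all made up for illustration, and a real system would enforce this in the cloud provider’s IAM layer rather than in application code:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory grant store illustrating Just-in-Time access.
grants = {}

def grant_access(user, resource, hours):
    """Grant time-bound access; the expiry is part of the grant itself."""
    expires = datetime.now(timezone.utc) + timedelta(hours=hours)
    grants[(user, resource)] = expires
    return expires

def has_access(user, resource):
    """Access exists only while an unexpired grant exists: no standing privilege."""
    expires = grants.get((user, resource))
    return expires is not None and datetime.now(timezone.utc) < expires

def revoke_expired():
    """What an automated revoker would run periodically."""
    now = datetime.now(timezone.utc)
    for key in [k for k, exp in grants.items() if exp <= now]:
        del grants[key]

grant_access("on-call-sre", "prod-db", hours=4)
print(has_access("on-call-sre", "prod-db"))   # True while the grant is live
print(has_access("on-call-sre", "billing"))   # False: never granted, never standing
```

Because expiry is intrinsic to the grant, the attack surface of any breached identity shrinks to whatever its currently live grants cover, rather than everything it was ever given.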

Getting cloud IAM provisioning right can only succeed by addressing the manual workflows that currently support multiple teams – namely DevOps, infrastructure, and security. The imperative to remove bottlenecks impacts the business as a whole, but also the success of established functional departments. Once priorities and goals are clearly aligned across departments, the solution is a natural next step. 

Learn how Apono empowers teams to improve performance without compromising on security!