While making the jump to the cloud is easy, it pays to know what you are doing – and how and why you are doing it – once you get there.
That statement is largely true, but there are still misconceptions about how cloud infrastructure should be used and managed – and about what the risks are.
I have spoken with a lot of clients about the importance of knowing what you are doing, how you are doing it, and why. In the cloud this matters even more, because it is so easy to play fast and loose. Do things in an uncontrolled way and you run a real risk of unwittingly exposing attack vectors that hackers can exploit.
The next thing you know, you've had data stolen, ransomware injected into your systems, or maybe you're mining Bitcoin for someone else. And all it took to expose your systems to these risks was a few mouse clicks.
To really know what you are doing takes a lot of practice, and practice makes perfect. We have all seen those job adverts where companies are looking for ‘rock stars’ and ‘cloud ninjas’; while these fanciful job titles might seem ridiculous on the surface, competent cloud professionals aren’t far from rock stars or ninjas in terms of their value to your organization.
Working with limited knowledge ties you to solutions that may be suboptimal and overly complicated; with complicated solutions you run the risk of losing track of which resources you are actually running.
My take on all this is that you need to know your cloud solutions inside out, like the back of your own hand, if you're going to make them simple and secure. This is where automation – one of the cornerstones of every modern cloud deployment worth its salt – comes in. And if your vendor is doing cloud deployments by hand from a graphical user interface, let's have a chat; I can pinpoint all the issues that come with that way of working.
Automation brings stability, traceability, and repeatability to the cloud. And it should be implemented to the max wherever possible.
I cannot emphasize enough the importance of automation. The specific tool matters less than whether it gets the job done and keeps everything under version control – that is key.
Automation makes it possible to track what changes have been made, who made them, when, and why. This gives you a comprehensive audit trail that helps minimize those “oopsie” moments.
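To illustrate, here is a minimal sketch of that audit trail using plain git in a throwaway directory. The repo, file name, author, and commit message are all made up for the example – the point is simply that when infrastructure changes live in version control, the history answers who, what, when, and why.

```shell
# Hypothetical infrastructure-as-code repo: every change is a commit,
# so the commit history *is* the audit trail.
demo="$(mktemp -d)"          # throwaway directory for the demo
cd "$demo"
git init -q
git config user.name "Alice Example"
git config user.email "alice@example.com"

# A change to the environment is a change to a tracked file...
echo "instance_count: 2" > web-tier.yaml
git add web-tier.yaml
# ...and the commit message records the "why".
git commit -q -m "Scale web tier to 2 instances for launch week"

# Who, what, when, and why -- straight from version control:
git log --pretty=format:'%an | %ad | %s' -- web-tier.yaml
```

Compare that with clicking around a console: the change still happens, but the who/when/why evaporates the moment the browser tab closes.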
Automation bundled with a cloud-native way of doing things doubles the benefit, because you get more control over the resources used to run your applications. You can scale resources at a more granular level to meet the needs of your users, and when there is a quiet moment you can scale down to the bare minimum. Deploying new environments can be automated and triggered straight from the development team's Kanban board, and, vice versa, once an environment is no longer needed it can be removed automatically.
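As a back-of-the-envelope sketch of that granular scaling, the replica count can be derived directly from current demand. The function name, the requests-per-replica target, and the floor/ceiling limits below are illustrative, not any particular cloud API:

```shell
# desired_replicas RPS PER_REPLICA MIN MAX
# Scale to just enough replicas for the current request rate,
# but never below a bare-minimum floor or above a safety ceiling.
desired_replicas() {
  local rps=$1 per=$2 min=$3 max=$4
  # Integer ceiling division: how many replicas does the load need?
  local needed=$(( (rps + per - 1) / per ))
  (( needed < min )) && needed=$min   # quiet moment: bare minimum
  (( needed > max )) && needed=$max   # busy spike: capped for safety
  echo "$needed"
}

desired_replicas 950 100 1 20   # busy period  -> 10
desired_replicas 0   100 1 20   # quiet moment -> 1
```

Real autoscalers layer smoothing and cooldowns on top of a rule like this, but the core idea is the same: pay only for what the current load actually needs.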
I have a dark past in on-premises computing, and it is mind-boggling how quickly the landscape has changed – and I think this is just the beginning. We are on cloud v1, and once AI and the cloud's next phase hit full speed, there will be no room for old school.
In the next blog in this series I'll delve a little deeper into some easy steps you can take to secure your environment.