In 2017, the Belgian Cost of Cybercrime project (KUL) published the results of an enlightening study aiming to measure the impact of cybercrime, and more broadly cyber attacks, on Belgian businesses.
We can highlight two results from this paper. First, most businesses have been hit by one form of cyberattack or another, some even more than once a year. So, the likelihood of being hit is quite high.
Second, the average cost per incident is relatively low, most of them below 500€, although in some cases it was above 10.000€. It certainly depends on the kind of business you run and on the size of your company, but it means SMEs shouldn’t have to spend a fortune on protection measures.
Recently, the DHS (US Department of Homeland Security) announced it is developing, with private partners, a solution to mitigate Telephony Denial of Service (TDoS) attacks against emergency numbers and other critical phone numbers.
In recent years, TDoS attacks seem to have flourished in the US. They are often used to extort a ransom from the owner of the targeted number.
If you have already run a Business Impact Analysis on your telephony system, you probably know how much one day of downtime might cost you. You probably have some solutions in place, but do they protect you against a TDoS attack?
Don’t forget to add TDoS to your list of threats if it is relevant for your business.
The SSL certificates issued by the Israel-based Certificate Authority StartSSL (https://www.startssl.com/) have been blocked by Google Chrome and Mozilla Firefox since March 2017. Behind what could be just a technical issue, there are some disturbing facts:
First, the reason Google and Mozilla decided to progressively block StartSSL (and, more importantly, WoSign) is the issuance by WoSign, a Chinese Certificate Authority, of multiple SSL certificates for domains for which they had received no mandate and whose ownership by the requester they did not validate. The first case reported to Google involved GitHub, the famous source code repository. As WoSign had « secretly » bought StartSSL and integrated its infrastructure into its own, StartSSL has been « sentenced » to the same distrust by most browsers as its parent company.
Since browsers do not use DNS CAA records to check whether the Certificate Authority that issued an SSL certificate for a domain is the authorized one, this could have allowed someone to impersonate GitHub, or at least to lure some users to a fake GitHub site (in any case, GitHub hadn’t set its CAA record). Such behavior is unacceptable for any certificate issuer, as trust is the cornerstone of the entire SSL certificate paradigm. Google and Mozilla’s reaction therefore seems proportionate. However, you can imagine the impact of such a sentence: for any CA, being withdrawn from the list of trusted certificates of the two main browsers is akin to a death penalty.
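For the curious, the authorization logic a CAA-aware issuer applies boils down to a set-membership check over the domain’s `issue` records. Here is a minimal, simplified sketch in Python; the record strings are invented examples, and real RFC 6844 processing also involves climbing the DNS tree and handling `issuewild`:

```python
# Simplified CAA authorization check (RFC 6844 semantics, heavily reduced).
# Record strings are hypothetical examples, not live DNS data. Real
# processing also climbs parent domains and handles "issuewild" tags.

def ca_authorized(caa_records, ca_domain):
    """Return True if ca_domain may issue, given a domain's CAA record set."""
    issuers = [r.split('"')[1] for r in caa_records if " issue " in r]
    if not issuers:
        return True  # no 'issue' records present: any CA may issue
    return ca_domain in issuers

records = ['0 issue "digicert.com"', '0 iodef "mailto:security@example.com"']
print(ca_authorized(records, "digicert.com"))  # True
print(ca_authorized(records, "wosign.com"))    # False
```

This is exactly the kind of check that would have flagged an unmandated issuer, had browsers (or the CA itself) consulted the record.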
The second disturbing fact is that StartSSL failed (or decided not) to properly inform its customers. Worse, it continues to sell its Class 1 certificates despite the fact that they are basically useless. That’s not the kind of commercial decision that will help restore trust in the Israeli company, even if WoSign has defined a remediation plan aiming at giving more autonomy to StartSSL (see below).
Customers who had paid for Enterprise Validation have lost their money and are now stuck with blocked certificates. The only cheap and rapid way to restore access to their website (while keeping SSL/TLS active) is likely to switch to free Let’s Encrypt certificates.
I don’t know what the future holds, but I wouldn’t recommend StartSSL to anyone anymore, and I doubt any security-aware person would. That’s not a good indicator for a bright future.
These past few years, interest and budgets for ethical hackers and pentesters have grown rapidly. They are gaining more and more visibility (see the Belgian Cyber Security Challenge or the European Cyber Security Challenge). More importantly, consulting companies have been recruiting young and talented hackers by the dozen in recent years.
During the last decade, a lot of (not to say most) TV shows and even novels have featured, or even starred, a hacker:
Lisbeth Salander in Millennium,
Harold Finch in Person of Interest,
Felicity Smoak in Arrow,
Elliot Alderson in Mr Robot,
Skye in Marvel’s Agents of S.H.I.E.L.D.,
Christopher Pelant in Bones,
Penelope Garcia in Criminal Minds,
Luther Stickell in Mission: Impossible,
and the list goes on.
Nowadays, being an (ethical) hacker is sexy, trendy and well paid. It’s no surprise that a lot of young graduates want to embrace this career. In itself, that is a good thing, as we need more skilled and talented professionals in cybersecurity.
However, it might be a bit short-sighted, as AI-powered automated hacking systems are on our doorstep (see DARPA’s Cyber Grand Challenge and other AI-powered systems in the links at the bottom of this post).
Nevertheless, that’s not really my point here. With all these young geniuses at work uncovering our weaknesses, we still don’t have enough talented people to fix the issues.
WE NEED MORE FIXERS!
When I talk about fixers, I don’t only mean people skilled enough to fix the vulnerabilities discovered by our code breakers, but also people able to fix governance, processes, organizations and people. We need professionals who can deliver effective security awareness (awareness that actually makes people change their behaviour), people who can implement flawless IT & security governance, people able to design processes that prevent attacks by design, people able to define new strategies and to implement them (or at least to get others to implement them), people who understand which detail the devil is hidden in. Hackers just need to find one vulnerability; we have to fix them all. It is less sexy, arguably more complicated, and there are not enough people who want to fix the problems… but we clearly need more. So, young geniuses, when you get bored of breaking things, please come to the light side and help us fix this mess.
You may have heard that US federal judge Thomas Rueter ruled against Google in its refusal to hand over to the FBI the personal emails of one of its customers, a refusal based on the fact that the data were stored in a European data center.
By contrast, in 2016, in a case against Microsoft, a federal court ruled that US investigators could not force the company to hand over emails stored on a server in Europe (Dublin, in that specific case).
Of course, there is much more at stake here than just access to one customer’s email: there are billions of dollars at stake. Most companies and individuals in Europe are moving their data to the cloud. The biggest cloud service suppliers in the world are American companies (Amazon, IBM, Google and Microsoft together representing around 50% of the market), and a large number of European companies are outsourcing their services to these vendors. However, the GDPR (the European General Data Protection Regulation; see also Wikipedia for an overview) requires strong protection of our personal data (including our emails). As the US and the EU aren’t totally aligned on this matter, most European companies require their cloud providers to store and process their data in European data centers in order to guarantee that the European regulation will be enforced.
And now, this new ruling might jeopardize all that (or at least be the start of it). If the sole fact of having an American company as a supplier allows the US to bypass the GDPR, would European companies still be allowed to use them to store personal data? Would we see European companies and individuals leaving Gmail, Google Apps, AWS, Outlook and other US-based services for European-owned companies? It would be a big mess… and maybe a huge opportunity for some European challengers.
Why is usability important for security management? Is it even important? Obviously for a lot of people, it is not. And that’s a problem. But what is usability anyway?
According to Wikipedia, and I find the definition pretty accurate, usability is “the ease of use and learnability of a human-made object such as a tool or device. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use”.
In other words, usability is the process of designing things so they can be easily used and mastered by their end users. Usability is not just about design; it is a science. It is about optimizing our environment for our brains and our bodies. For example, usability is when you put handles on a box so it is easier to lift. Google, the most visited website in the world, is a model of usability: straight to the point, one field, and you get what you need in one click. It even completes the words for you as you type. There’s a reason they are number one, and it’s called user experience (UX).
Nowadays, usability, neuroergonomics and even neuromarketing are at the heart of successful designs. Whatever you are selling, you had better make it easy to use and even sexy. The traditional KISS (Keep it simple, stupid) design requirement has gained an additional “S” for sexy (KISSS: Keep it simple, stupid and sexy). The article I wrote about the ineffectiveness of SPAM awareness sessions was also an advocacy for using insights from the cognitive sciences to design more effective awareness material.
Why do I care?
If you are a product manager for a startup, you are probably already aware of all the usability requirements for your products. That’s where startups win the war against the old dinosaurs: better-engineered products with better usability and even sexiness. We all learned from the master’s success: Apple. Steve Jobs knew the rules for making something usable: fewer buttons. Sleek design is all about simplicity.
But if you are working in security management, as a security project manager, or even as a security architect, chances are you won’t care about usability. You might think that your job is to make your company secure, not sexy. And you’re right about that. Except that, when it comes to humans, you’re probably failing (in large part). You may think: « These stupid end users still don’t get it. » Of course, they still manage to use weak passwords. If you force strong passwords, they write them down or use the same one everywhere. They still don’t know the security policies. They watch the very nice slides you showed them during the mandatory security training at their induction, but the next day they are already sharing their passwords with their colleagues. Don’t even mention their inability to spot a phishing attempt! And let’s not speak about your system administrators, those fools who believe they are the kings of the realm and have left so many vulnerabilities open in their systems that the latest vulnerability report you received was too long to finish in one day. Hopefully, you will make a strong point during the next security steering committee to ensure these operations guys’ boss understands he must bring them back to the righteous path.
Ring a bell? Not even a little bit? I think so.
If we believe an old saying, wisdom is being able to differentiate between what you can change and what you can’t. The goal here is to focus your energy and efforts where they matter. So, think again about your problems. What did you do? You ran awareness sessions? You wrote very thorough policies and standards? You made sure people were obliged to read them, to sign in blood that they had read your literature and that they would abide by your rules?
Did it work? How well? Be honest: some miscreants continue to refuse to follow the rules of the holy god of security. They are probably psychopaths! Or could they just be humans? What if you could increase the probability that they will read your policies? Even better, what if you could improve the odds of having them change their behaviour and embrace your security culture? You don’t believe in Santa Claus? Me neither, but I do believe in science!
Neuroergonomics & neuromarketing of security!
Neuroergonomics and neuromarketing are the catchwords for the use of social psychology and the neurocognitive sciences to increase your desire to use a product, to improve your ability to handle concepts and remember things, or to get you addicted to some applications (think of Facebook or Twitter). If people can influence what you eat, what you drink, what you wear, what you watch or what you read, why couldn’t we use this knowledge to change your people’s attitude towards security?
Is it worth it? Well, are you already paying people to communicate, to make videos, to draw cartoons, and yet you still have too many incidents and non-compliances? Yes? Then maybe you should start investing in better-designed solutions and make usability a requirement for all the projects, tools or “products” security wants to sell.
If you have an intranet, your security policies must be one click away from the front page.
You must have a clear organization, a hierarchy and a search engine allowing anybody to quickly find the policy or procedure they need.
Policies should go straight to the point, from the reader’s point of view, starting with the first pages.
Forget legalese and technical talk; use common vocabulary.
Do’s and don’ts are likely more efficient than long descriptions.
Use words and situations your audience is familiar with.
Ensure your rules are translated into actions in their processes and procedures.
Ensure these procedures are pragmatic and easy to read.
Use pictures, screenshots, beautifully designed templates. Make it look more like a fashion magazine than an old book.
Use positive words. Any command that can be better performed by a dead man is a bad command (example: « Don’t use short passwords »… a dead man can do that very well; prefer « Use long, secure passwords »).
Group similar things together.
Be consistent. Even better, be congruent (combine multiple associations), like red + triangle to signal don’ts and green + checkbox to signal do’s. Keep the colors consistent (red negative, green positive).
Use the same word consistently to designate one thing. Even if synonyms can make reading less tedious, always using the same word for one object or concept makes it easier to understand (all the more for new concepts).
Keep it as short as possible (more than 10 pages is clearly too much).
Use symbols, signals, icons and pictures.
Keep the rule of 3 in mind: if you want to explain a concept, break it down into 3 parts/steps/components, then explain the 3 sub-concepts (using 3 other steps/concepts/parts), and so on until people understand it. You can go up to 5 « objects », but no higher.
Embed security processes into existing processes.
If a process works, don’t fix it.
If you can streamline it, do it, even if it is not your first job. Making people’s lives easier will facilitate the acceptance of the controls, and it might even improve people’s attitude towards security.
Create links between all processes so they can benefit from each other, e.g. ensure vulnerability scans feed the CMDB to ensure consistency. (It is supposed to be like that in a perfect world, but that’s just theory.)
Forget long swim-lane drawings or decision trees spanning 3 pages; keep it short by splitting the process.
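The “dead man” tip above can even be turned into a crude lint over your policy statements. Here is a toy sketch; the word list and function name are my own illustration, not an established tool:

```python
# Toy "dead man's test" checker: flag rules phrased as pure prohibitions,
# which a dead man could also "follow" perfectly. The prefix list is a
# deliberately simple heuristic for illustration only.
PROHIBITIONS = ("don't ", "do not ", "never ")

def fails_dead_man_test(rule):
    """True if the rule only tells people what NOT to do."""
    return rule.strip().lower().startswith(PROHIBITIONS)

rules = ["Don't use short passwords",
         "Use a long passphrase of four random words"]
for rule in rules:
    verdict = "rephrase positively" if fails_dead_man_test(rule) else "ok"
    print(f"{rule!r}: {verdict}")
```

Running something like this over a draft policy is a quick way to spot rules that describe no actual behaviour.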
Changing behavior is something we do out of emotion, not based on rational thinking. Even if rational thoughts can lead to a change, we only initiate that change if we connect those thoughts with some emotion.
Use real, concrete situations (something that happened or could happen).
They must be relevant to your audience (use scenarios involving your audience, allowing them to identify with the characters).
Use as much as possible what they already know well (places, situations, products, applications, the organization, but also more personal things: kids, sports, cooking, walking in the street, …).
Show them the concrete consequences for people when they don’t comply with the rules or the secure behavior (it’s easier to have feelings towards people than towards an organization).
Foster self-identification with your characters by using little positive details your audience can relate to (« Sam likes to take a coffee with his colleagues, Alice likes
Songs, rhymes, jokes, kittens: anything outstanding will help memorization. So use it when it matters (if you use the same trick too often, its efficiency tends to fade).
Associate non-« sexy » items (like security rules) with more attractive ones (a nice place, a smile, a cute cat picture, a beautiful woman – yes, it works for both men and women –, a good song).
Repeat, repeat and repeat the message, but change the format so it doesn’t get boring and so you can use various ways to reach people.
We are all different; what works for you won’t necessarily work for everybody.
PS: Yes, I could make this list more « sexy » and it will likely come, but it will be in the (near) future 🙂
When you work in the security industry, being paranoid is kind of natural (or is it the other way around?). So, when you see how easily people, processes and technologies can be hacked, you quickly become suspicious of everything. We all know bad things can happen, and most of the time we try to mitigate the risks (without even thinking too much about it). Business as usual, so to speak. However, while I have a good idea of the risks our future holds (which makes me even less worried about my business’ future), it seems that most people don’t imagine how much danger the Internet will bring them. So here are some clues.
The buzzword getting a lot of media attention lately is probably IoT: the Internet of Things. According to the media, it’s the IoT that allowed hackers to bring websites like Amazon and Netflix to their knees for a few hours on October 21st. But that’s a mistake. Although the IoT has led to some specific new technologies, like Bluetooth 4.1 or ZigBee, to accommodate the low-consumption and low-cost requirements necessary to embed technologies in nearly all objects, it is probably a mistake to see the IoT as something new or different. As Bruce Schneier said recently before the US Congress, we should not see these as objects with computers (and an Internet connection) in them, but rather as computers that do things. A Tesla is a computer with wheels (and when you see how Tesla manages its updates and its manufacturing process, it is closer to the software industry’s way of working than to the car industry’s), a smartphone is a computer with a microphone and a 4G connection, a connected fridge is a computer with an extra cooling system, and so on.
Bottom line: these connected objects are all computers, and we must treat them as such. So, as for all computers, when it comes to managing security we should think about patch management, access control, hardening, change management, release management, network segregation, encryption, key management, user awareness and training, and all these processes and best practices. Unfortunately, most connected-object manufacturers didn’t spend enough time and money designing secure, easily upgradable objects with strong and secure communication protocols. Consequently, the future is now… and we are not ready for it.
But what is our future? Let’s get a glimpse of it. In the tenth episode of the second season of “Homeland”, Nicholas Brody helps terrorists kill a political figure by giving them his pacemaker’s serial number, allowing them to hack it and induce a heart attack.
In another TV show, “Blacklist”, a computer genius remotely triggers the airbag of a moving car, causing it to crash and killing its driver.
Is this science fiction? Unfortunately, not anymore! Exploits against « smart » cars are becoming more and more frequent. More recently, British and Belgian researchers devised a wireless attack on pacemakers (1). While the latter exploit needs specific and rather costly hardware (3.000 to 4.000€), we are just one step away from having a ZigBee or BT 4.2 interface. Do you want to kill someone with your smartphone? Don’t worry, you won’t have to wait too long.
At the same time, as other devices with less deadly capabilities spread across the world, they provide a potential army of insecure devices that can be used for Distributed Denial of Service attacks, as seen recently, but also, why not, for parallel tasking: helping to brute-force passwords, crack cryptographic keys, or hide communication sources by bouncing thousands of times off these little soldiers that we hand to hackers. Nice, isn’t it? We purchase the very devices that will be used against us in the near future. To be honest, for most people, including a lot of security specialists, it is not easy to tell a secure IP camera from an insecure one, simply because we don’t have time to test everything and there is no useful and relevant certification for that. So think about the number of « computers » you have at home: your Internet router, your tablet, your PC or Mac, your smartphones, your video surveillance camera, your printer, your TV box, your Blu-ray player, your « smart » TV, your alarm, your new « connected » fridge, your smart thermostat, your kids’ PSP, the IP doorbell, and so on. In your home alone, you may have more than 10 little future soldiers for the next hacker’s army. Android, iOS or IP cameras: nearly all of them have exploitable vulnerabilities.
So, we have an army, and soon we will have legions of potential targets for a new kind of attack: DoL (Denial of Life) attacks. Imagine ransomware targeting your pacemaker, large-scale attacks on cars causing traffic jams or worse, new hitmen (version 3.0) changing the medication of hospital patients and overdosing them. Just watch any episode of « Person of Interest »; it was just a few inches away from actual reality… and we are getting there.
It sounds crazy, doesn’t it? As Bruce Schneier said, the Internet is not that fun anymore. It’s not a game anymore. Things are getting serious, and we should act accordingly: not only at the government level, but also in industry and in the civilian world. We should ask our suppliers and manufacturers to secure their devices, to make them safe AND easy to control.
In past months, the press reported several security incidents involving companies falling victim to ransomware (1)(2). Most of the time, the ransom had to be paid in Bitcoin. That’s logical, as Bitcoin makes it much easier and cheaper to launder the money and hide the recipient than traditional money-laundering circuits.
You may decide that dealing with cybercriminals is unacceptable (as with terrorists or kidnappers), but if you don’t have such a policy and the ransom is lower than the overall cost of restoring your services yourself (including manpower, business losses and public image), you may decide to pay the price. In that case, time is of the essence. To limit the impact and comply with the criminals’ conditions, you might have no more than 48, or even just 24, hours to pay your “lack-of-sufficient-security fine”.
But how do you pay in Bitcoin and keep it under the radar in such a short amount of time? Considering the time spent debating the question “do we pay or not”, the time left to actually pay will likely be very short. So, you had better have your Bitcoin wallet ready and loaded, or some agreement with a trusted Bitcoin exchange platform to guarantee the required discretion. Bottom line: nowadays, it might be wise to include a Bitcoin wallet in your Disaster Recovery Plan.
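The pay-or-restore trade-off described above is, at its core, a cost comparison you can settle long before the clock starts. A back-of-the-envelope sketch, where every figure is an invented placeholder to be replaced with your own Business Impact Analysis numbers:

```python
# Back-of-the-envelope pay-vs-restore comparison. Every number below is a
# made-up placeholder; plug in figures from your own Business Impact Analysis.

def restore_cost(days_down, loss_per_day, manpower, image_damage):
    """Total cost of restoring services yourself, in EUR."""
    return days_down * loss_per_day + manpower + image_damage

ransom = 15_000  # hypothetical demanded amount, in EUR
cost = restore_cost(days_down=3, loss_per_day=8_000,
                    manpower=5_000, image_damage=10_000)
print(f"Restore yourself: {cost} EUR, ransom: {ransom} EUR")
print("Paying is cheaper" if ransom < cost else "Restoring is cheaper")
```

Doing this arithmetic in advance, per scenario, is what leaves the 24–48 hours free for actually executing the decision rather than debating it.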
Whatever you’ll decide, decide now and be prepared.
Phishing and spear-phishing campaigns are becoming more and more elaborate, hence more difficult to identify and consequently more successful. Crelan’s 70 million € loss in early 2016 is a good example of the potential impact of such a successful social engineering attack.
As automated security systems are unlikely to detect and block the most elaborate and targeted attacks (they need a significant number of similar emails to trigger their alerts), security officers are left with security awareness campaigns focused on developing the skills to detect (spear) phishing attacks. It’s logical, it’s what security standards advise you to do, but watch out: you may be doing more harm than good!
One of the first mistakes in this approach is to consider awareness (or communication) as a goal in itself. Any communication aims at instilling a change in its recipients. The aim of an awareness campaign is to change people’s behaviour and attitude so they pay more attention to the source of their emails, their contents and the legitimacy of what is asked of them. So basically, we should first measure the current situation and then aim at a certain improvement in our “smart” metrics, the most obvious and significant one being: how many people will fall for a (spear) phishing email?
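That metric is straightforward to compute from the results of an internal test campaign. A minimal sketch, where the sample counts are invented for illustration:

```python
# Minimal click-rate metric for an internal phishing test campaign.
# The counts below are invented sample data, not real campaign results.

def click_rate(clicked, delivered):
    """Fraction of delivered phishing emails whose link was clicked."""
    return clicked / delivered if delivered else 0.0

baseline = click_rate(clicked=87, delivered=500)  # before the campaign
followup = click_rate(clicked=41, delivered=500)  # after the campaign
print(f"baseline {baseline:.1%} -> follow-up {followup:.1%}")
```

Tracking this number before and after each campaign is what turns “we did awareness” into a measurable behaviour change.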
How do we usually do that? Often by a combination of training, online training, posters and “homemade” phishing campaigns to measure the company’s exposure and tickle our employees. In doing so, we appeal to fear: fear of contributing to a security incident, to a fraud, to a loss of money; fear of getting fired.
Fear appeal is used to leverage behavioural change, as one believes the emotional reaction caused by fear will increase the likelihood of the appropriate, secure behaviour occurring. You had better think twice because, as is often the case, the devil is in the details.
Fear appeal effectiveness is still a debated question (that’s the principle of science), mainly because it might work under some conditions. In their article “Appealing to Fear: A Meta-Analysis of Fear Appeal Effectiveness and Theories”, Tannenbaum et al. (2015) analysed 217 articles on the subject and found few conditions making fear appeal ineffective, while effects seem most apparent in women and for one-time behaviours.
However, in a review of 60 years of studies on fear appeal, Ruiter et al. (2014) “concluded that coping information aimed at increasing perceptions of response effectiveness and especially self-efficacy is more important in promoting protective action than presenting threatening health information aimed at increasing risk perceptions and fear arousal”. A 2014 study by Kessels et al., using event-related brain potentials and reaction times, found that health information arousing fear causes more avoidance responses among those for whom the health threat is relevant.
Still, there seems to be some consensus on specific conditions such communication must meet: immediately after the fear arousal, it must provide a solution allowing the audience to reduce this fear with a sense of self-efficacy. To put it simply, we must offer our audience a simple way to fix the issue: an easy-to-follow behaviour (one that doesn’t require too much psychological and physical energy). If our solution is so complex that it (or the mere thought of using it) generates more stress than the feared event, our brain will likely avoid this behaviour and deny the reality of the risk (and the fear).
The latest research in neuroscience (more specifically in the field of neuroergonomics) provides some guidance on shaping our message and solution so that our audience can easily grasp our communication and adopt the desired behaviour.
As with most communication, we must avoid saturating the working memory. What does that mean? If we receive too much information at once, our brain cannot process it all. It is like a lift: if more people try to enter than its capacity allows, the lift gets stuck and will not move. The same goes for our brain if we saturate the place where information is stored for processing (what we call the working memory).
The average span of human working memory is 5 objects or, to use Husserl’s terminology, noemata. For most people, this span is between 3 and 7 objects.
But what is an object (or noema) in this context? If I give you a phone number digit by digit (let’s say 1, 5, 5, 5, 1, 2, 3, 4, 4, 6, 9), it will be hard for you to memorize the 11 digits, each digit being one object. But if we combine some digits into small numbers (1, 555, 123, 44, 69), it becomes easier to remember. The reason is that these small numbers are also objects (noemata) for our working memory, and in that case we don’t saturate it, as there are only 5 objects (within the average memory span).
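The chunking trick in the phone-number example is easy to demonstrate mechanically. A small sketch, with group sizes chosen purely for illustration:

```python
# Chunk a digit string into small groups so each group becomes one
# "object" (noema) for working memory. Group sizes are illustrative.

def chunk(digits, sizes):
    """Split a digit string into consecutive groups of the given sizes."""
    out, i = [], 0
    for size in sizes:
        out.append(digits[i:i + size])
        i += size
    return out

number = "15551234469"
print(list(number))                    # 11 objects: well beyond the ~5-object span
print(chunk(number, [1, 3, 3, 2, 2]))  # 5 objects: within the span
```

The information is identical in both printouts; only the number of objects the reader must juggle changes, which is the whole point of chunking.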
Why is each small number one object while the large one is not? Simply because we are used to them. If you were born in 1980, that number can become a single object (you are well acquainted with it), while 1256 could require 2 noemata (12 and 56).
The same is true of words. Well-known words (and their associated concepts) are easier to process. That is why I paired the word “noema” (likely a new name for most readers) several times with the word “object” (a quite common word and clear concept), so it can serve as a “handle” to better “grasp” the new concept. Similarly, using the metaphor of a “handle” to “grasp” a concept eases the understanding (the grasp) of that concept.
To summarize, our solutions, our expected new behaviours, must be as close as possible to something we already know in order to make them easier to grasp.
As a concrete example, if you want your users to check the validity of an email sender’s domain name (that concept alone, i.e. what’s to the right of the @ in an email address, is not that easy to understand for a lot of people), you should provide a tool available in the first level of a menu or a link among the browser favourites. The best option would be to have the information integrated into the email, or one click away from it.
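The domain check itself is trivial to script; the hard part is making it one click away. A minimal sketch, where the trusted-domain list is a made-up example:

```python
# Extract the domain to the right of the @ and compare it with a list of
# expected sender domains. The trusted set below is a made-up example.
TRUSTED = {"example.com", "payroll.example.com"}

def sender_domain(address):
    """Return the part of an email address to the right of the last @."""
    return address.rsplit("@", 1)[-1].lower()

def looks_legitimate(address):
    return sender_domain(address) in TRUSTED

print(looks_legitimate("hr@example.com"))         # True
print(looks_legitimate("hr@examp1e-secure.biz"))  # False
```

Note that the second address is a look-alike (“examp1e” with a digit one), exactly the kind of detail an unaided user misses and a tool catches instantly.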
E-commerce websites integrated such concepts long ago. They understood that if you want a client to order something, they must be able to find and order it in 3 clicks or fewer. You may know the saying: “the best place to hide a body is on the second page of a Google search”. Meaning? Most people never go to the second page; it is one click too far.
Using pictures, simple drawings (keep the 3-to-7-objects rule in mind), stories and jokes helps memorization. Anything relevant to the concept, or totally outstanding, might help too. Emotions help memorization: if you scare people first, making them laugh or smile with your “solution” might help them memorize it. Go kittens! (see https://www.ezonomics.com/stories/how-pictures-of-kittens-can-help-you-manage-money/)
Also, do not forget a basic principle of behaviourism: the sooner, the better. If you want to foster an action, the reward must come very soon, ideally immediately, after the action. So, if people click on a link in a “test” phishing email, you may scare them by pointing out their mistake, but you should also immediately show them how to avoid that experience next time, with a few quick tips on what they did wrong and how to do better.
Here is a nice example of a video playing just a bit on fear and providing advice in a non-threatening, aesthetic (it matters too) and very simple way (by http://www.nomagnolia.tv/).
For years now, Information security is a fast growing market. At least for a couple of years, the cyber security market is growing fast. Even in these times of budget cut in many sectors, quite often the cyber security department manages to negotiate an increase of its operational budget. That’s significant, isn’t it? Moreover, nowadays it becomes nearly impossible to ignore the wave of “cyber-“ words: cybercrime, cyberterrorism, cybersex or cyberbullying.
Nor could you have missed the news about CERT.be, the federal Cyber Emergency Team (CERT used to stand for Computer Emergency Response Team, apparently less “sexy” than Cyber Emergency Team), which is, according to its website, “a neutral specialist in Internet and network security” (so cyber security is Internet and network security?). Alongside CERT.be, you probably also read about the Belgian Centre for Cyber Security (CCB). Neither could you have missed the buzz around the new Belgian Cyber Security Coalition, or the 1.8 billion € allocated by the European Commission to a public-private partnership meant to increase cyber security, in which the private sector is represented by the newly born European Cyber Security Organisation (ECSO). That is a lot of cyber-related news, isn’t it? Is Asimov’s vision becoming reality? It certainly sounds like we are in one of his Robot series books.
But what does “cyber” mean? How is cyber security different from information security or IT security? Which of the two is it?
According to NIST, cybersecurity is “the process of protecting information by preventing, detecting, and responding to attacks”. So, is it information security? But according to the new worldwide reference, Wikipedia, “cyber” is one of the “Internet-related prefixes added to a wide range of existing words to describe new, Internet- or computer-related flavors of existing concepts, often electronic products and services that already have a non-electronic counterpart”. So cyber security should be the Internet- or computer-related flavor of information security, the one we used to call IT security. But is it?
Because lately I have heard the “cyber” buzzwords used in so many different contexts by so many people (including some executives clearly not knowing what they were talking about), I have difficulty understanding what exactly we are talking about.
Understand me well: I like the fact that our country’s leaders have finally decided to address the rise of Internet-related threats more seriously. As our risk surface is expanding drastically, it is high time to address those risks at a more global level (although we are still far from the clearly necessary worldwide cybersecurity agency, for many obvious political reasons). I also like the fact that my clients’ boards of directors give more attention to “cybersecurity”, whatever they think it is. At last, it provides us with momentum to raise awareness and bring governance maturity up to the necessary level.
What I don’t like in the “cyber” fashion is seeing such an important subject become more and more vague and focused, again, on the technological aspects. With the new buzzword came a lot of supposed-to-be-panacea products claiming they will solve all the problems overnight (or in a few months, which at our timescale is the same). I have even heard of CISOs (Chief Information Security Officers) being rebranded CCSOs (Chief Cyber Security Officers).
Is that really progress? For years we fought to have CISO positions created at board level, in order to get out of the IT ghetto. The aim was also to be present where information security belongs: in the organization’s processes and workforce. In 2016, the latest IBM security survey still attributed 60% of attacks to insiders. One employee out of five is ready to sell his corporate network credentials. The biggest weaknesses are still in the business processes and in the human beings behind them. Most ethical hackers and red team members know they don’t need a zero-day exploit to get into a target’s systems; a charming smile and a couple of beers will get them what they need. With all the good this new cyber buzzword brings comes an evil: we are going back to a computer- and technology-focused perception of corporate security issues. Humans, processes and facilities are relegated to second place, while they still represent more than 70% of the risks. Does that make sense? Is cyber security an evil buzzword after all?
Few will share this article, as many cyber security professionals won’t dare to challenge the marketing machine that is actually feeding them. And as I wrote, some good came out of all this; but it is necessary to see the side effects too, and to make sure marketing people are not the ones deciding where you should put your focus.