Why is usability important for security management?

Why is usability important for security management? Is it even important? Obviously for a lot of people, it is not. And that’s a problem. But what is usability anyway?


According to Wikipedia, and I find the definition pretty accurate, usability is “the ease of use and learnability of a human-made object such as a tool or device. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use”.

In other words, usability is the discipline of designing things so they can be easily used and mastered by their end users. Usability is not just about design, it is a science: making our environment fit our brains and our bodies. Putting handles on a box so it is easier to lift is usability. Google, the most visited website in the world, is a model of usability: straight to the point, one field, and you get what you need in one click. It even completes your words as you type. There’s a reason they are number one, and it’s called user experience (UX).

Nowadays, usability, neuroergonomics and even neuromarketing are at the heart of successful designs. Whatever you are selling, you had better make it easy to use and even sexy. The traditional KISS design requirement (Keep It Simple, Stupid) has gained an additional “S” for sexy (KISSS: Keep It Simple, Stupid and Sexy). The article I wrote about the ineffectiveness of spam awareness sessions was also an advocacy for using insights from the cognitive sciences to design more effective awareness material.

Why do I care?

If you are a product manager for a startup, you are probably already aware of all the usability requirements for your products. That’s where startups win the war against the old dinosaurs: better engineered products with better usability and even sexiness. We all learned from the master of success: Apple. Steve Jobs knew the rule for making something usable: fewer buttons. Sleek design is all about simplicity.

But if you are working in security management, as a security project manager, or even as a security architect, it is likely that you don’t care about usability. You might think that your job is to make your company secure, not sexy. And you’re right about that. Except that, when it comes to humans, you’re probably failing, in large part. You may think: “These stupid end users still don’t get it.” Of course, they still manage to use weak passwords. If you force strong passwords, they write them down or use the same one everywhere. They still don’t know the security policies. They watch the very nice slides you show them during the mandatory security training at their induction, but the next day they are already sharing their passwords with their colleagues. Let’s not even mention their inability to spot a phishing attempt! Or your system administrators, those fools who believe they are the kings of the realm and have left so many vulnerabilities open in their systems that the latest vulnerability report you received was so long you couldn’t finish it in one day. Hopefully you will make a strong point at the next security steering committee to ensure these operations guys’ boss understands he must bring them back to the righteous path.

Ring a bell? Not even a little bit? I think so.

If we believe an old saying, wisdom is being able to differentiate between what you can change and what you can’t. The goal here is to focus your energy and your efforts where they matter. So, think again about your problems. What did you do? You ran awareness sessions? You wrote very thorough policies and standards? You made sure people were obliged to read them, to sign in blood that they had read your literature and would abide by your rules?

Did it work? How well? Be honest: some miscreants still refuse to follow the rules of the holy god of security. They are probably psychopaths! Or could they be just humans? What if you could increase the probability that they will read your policies? Even better, what if you could improve the odds of having them change their behaviour and embrace your security culture? You don’t believe in Santa Claus? Me neither, but I do believe in science!

Neuroergonomics & neuromarketing of security!

Neuroergonomics and neuromarketing are the catchwords for the use of social psychology and neurocognitive science to increase your desire to use a product, to improve your ability to handle concepts and remember things, or to get you addicted to some applications (think about Facebook or Twitter). If people can influence what you eat, what you drink, what you wear, what you watch or what you read, why couldn’t we use this knowledge to change your people’s attitude towards security?

Is it worth it? Well, are you already paying people to communicate, to make videos, to draw cartoons, and yet still have too many incidents and non-compliances? Yes? Then maybe you should start investing in better designed solutions and make usability a requirement for all the projects, tools and “products” security wants to sell.



Make your policies and standards usable:

  • If you have an Intranet, your security policies must be one click away from the front page.
  • Provide a clear organization, a hierarchy and a search engine allowing anybody to quickly find the policy or procedure they need.
  • Policies should go straight to the point, from the reader’s point of view, starting from the very first pages.
  • Forget legalese and technical talk; use common vocabulary.
  • Do’s and Don’ts are likely more effective than long descriptions.
  • Use words and situations your audience is familiar with.
  • Ensure your rules are translated into actions in their processes and procedures.
  • Ensure these procedures are pragmatic and easy to read.
  • Use pictures, screenshots and beautifully designed templates. Make it look more like a fashion magazine than an old book.
  • Use positive words. Any command that can be better performed by a dead man is a bad command (example: “Don’t use short passwords”… a dead man can do that very well. Prefer “use long, secure passwords”).
  • Group similar things together.
  • Be consistent. Better yet, be congruent (combine multiple associations), like red + triangle to signal Don’ts and green + checkbox to signal Do’s. Keep the colours consistent (red = negative, green = positive).
  • Consistently use the same word to designate one thing. Even if synonyms can make reading less monotonous, always using the same word for one object or concept makes it easier to understand (even more so for new concepts).
  • Prefer lists.
  • Keep it as short as possible (more than 10 pages is clearly too much).
  • Use symbols, signals, icons and pictures.
  • Keep the rule of 3 in mind: if you want to explain a concept, break it down into 3 parts/steps/components, then explain the 3 sub-concepts (using 3 other steps/concepts/parts) and so on until people can understand it. You can go up to 5 “objects” but not higher.


Make your processes usable:

  • Embed security processes into existing processes.
  • If a process works, don’t fix it.
  • If you can streamline a process, do it, even if it is not your first job. Making people’s lives easier will facilitate the acceptance of the controls and might even improve people’s attitude towards security.
  • Create links between all processes so they can benefit from each other, e.g. ensure vulnerability scans feed the CMDB to ensure consistency. (It is supposed to be like that in a perfect world, but that’s just theory.)
  • Forget long swim-lane drawings or decision trees spanning 3 pages; keep it short by splitting the process.


Make your awareness material usable:

  • Changing behaviour is something we do out of emotion, not rational thinking. Even if rational thoughts can lead to a change, we initiate it only if we connect these thoughts with some emotion.
  • Use real, concrete situations (something that happened or could happen).
  • They must be relevant to your audience (use scenarios involving your audience, allowing them to identify with the characters).
  • Use as much as possible what they already know well (places, situations, products, applications, the organization, but also more personal things: kids, sports, cooking, walking in the street, …).
  • Show them the concrete consequences for people when they don’t comply with the rules or the secure behaviour (it’s easier to have feelings toward people than toward an organization).
  • Foster self-identification with your characters by using little positive details your audience can relate to (“Sam likes to take a coffee with his colleagues, Alice likes…”).
  • Songs, rhymes, jokes, kittens: anything that stands out will help memorization, so use it when it matters (if you use the same trick too often, its efficiency tends to fade).
  • Associate non-“sexy” items (like security rules) with more attractive ones (a nice place, a smile, a cute cat picture, a beautiful woman (yes, it works for both men and women), a good song).
  • Repeat, repeat and repeat the message, but change the format so it doesn’t get boring and you can reach people in various ways.
  • We are all different; what works for you won’t necessarily work for everybody.

PS: Yes, I could make this list more “sexy” and it will likely come, but in the (near) future 🙂

Will IoT kill us someday?

When you’re working in the security industry, being paranoid is kind of natural (or is it the other way around?). So, when you see how easily people, processes and technologies can be hacked, you rapidly become suspicious of everything. We all know bad things can happen and most of the time we try to mitigate the risks (without even thinking too much about it). Business as usual, so to speak. However, while I have a good idea of the risks our future is bringing us (which makes me even less worried about my business’s future), it seems that most people don’t imagine how much danger the Internet will bring them. So here are some clues.

The new buzzword getting a lot of attention in the media lately is probably IoT: the Internet of Things. According to the media, it’s IoT that allowed hackers to bring websites like Amazon and Netflix to their knees for a few hours on October 21st. But that’s a mistake. Although IoT has led to some specific new technologies like Bluetooth 4.1 or ZigBee to accommodate the low-consumption and low-cost requirements necessary to embed technology in nearly all objects, it is probably a mistake to see IoT as something new or different. As Bruce Schneier said recently before the US Congress, we should not see these as objects with computers in them (and an Internet connection) but rather as computers that do things. A Tesla is a computer with wheels (and when you see how Tesla manages its updates and its manufacturing process, it is closer to the software industry than to the car industry’s way of working), a smartphone is a computer with a microphone and a 4G connection, a connected fridge is a computer with an extra cooling system, and so on.

Bottom line, these connected objects are all computers and we must treat them as such. So, as for all computers, when it comes to managing security we should think about patch management, access control, hardening, change management, release management, network segregation, encryption, key management, user awareness and training, and all the usual processes and best practices. Unfortunately, most connected-object manufacturers didn’t spend enough time and money designing secure, easily upgradable objects with strong and secure communication protocols. Consequently, the future is now… and we are not ready for it.

But what is our future? Let’s get a glimpse of it. In the tenth episode of the second season of “Homeland”, Nicholas Brody helps terrorists kill a political figure by giving them the serial number of the man’s pacemaker, allowing them to hack it and induce a heart attack.

In another TV show, “Blacklist”, a computer genius remotely triggers the airbag of a car while it is being driven, causing the car to crash and the death of its driver.

Is this science fiction? Unfortunately, not anymore! Exploits on “smart” cars are becoming more and more frequent. More recently, British and Belgian researchers devised a wireless wounding attack on pacemakers (1). While the latter exploit needs specific and rather costly hardware (€3,000 to €4,000), we are just one step away from having a ZigBee or BT 4.2 interface. Want to kill someone with your smartphone? Don’t worry, you won’t have to wait too long.

At the same time, as other devices with less deadly capabilities spread over the world, they provide a potential army of insecure devices that can be used for Distributed Denial of Service attacks, as was seen recently, but also, why not, to perform parallel tasking, helping to brute-force passwords, crack cryptographic keys or hide communication sources by bouncing thousands of times off these little soldiers that we provide to hackers. Nice, isn’t it? We purchase the devices that will be used against us in the near future. To be honest, for most people, including a lot of security specialists, it is not easy to tell the difference between a secure IP camera and an insecure one, simply because we don’t have time to test everything and there is no useful and relevant certification for that. So think about the number of “computers” you have at home: your Internet router, your tablet, your PC or your Mac, your smartphones, your video surveillance camera, your printer, your TV box, your Blu-ray player, your “smart” TV, your alarm, your new “connected” fridge, your smart thermostat, your kids’ PSP, the IP doorbell and so on… In your home alone, you may have more than 10 little future soldiers for the next hacker’s army. Android, iOS or IP cameras, they nearly all have exploitable vulnerabilities.

So, we have an army, and we will soon have legions of potential targets for a new kind of attack: DoL attacks (Denial of Life). Imagine ransomware targeting your pacemaker, large-scale attacks on cars to cause traffic jams or worse, new hitmen (version 3.0) changing the medication of patients in hospitals, overdosing people. Just watch any episode of “Person of Interest”; they were just a few inches away from actual reality… and we are getting there.

It sounds crazy, doesn’t it? As Bruce Schneier said, the Internet is not that fun anymore. It’s not a game anymore. Things are getting serious and we should act accordingly. Not only at government level but also in industry and in the civilian world. We should ask our suppliers and manufacturers to secure their devices, to make them safe AND easy to control.

To be continued…

For more details…


Should companies create Bitcoin accounts to be ready to pay ransoms?

In the past months, the press has made public several security incidents involving companies falling victim to ransomware (1)(2). Most of the time, a ransom had to be paid in bitcoins. That’s logical, as Bitcoin makes it much easier and cheaper to launder the money and hide the recipient than traditional money-laundering circuits.

You may decide that dealing with cybercriminals is unacceptable (as with terrorists or kidnappers), but if you don’t have such a policy and the amount of the ransom is lower than the overall cost of restoring your services by yourself (including manpower, business losses and public image), you may decide to pay the price. In that case, time is of the essence. In order to limit the impact and comply with the criminals’ conditions, you might have no more than 48 or even just 24 hours to pay your “lack-of-sufficient-security fine”.

But how do you pay in bitcoins, and keep it under the radar, in such a short amount of time? Given the time spent debating the question “do we pay or not”, the time left to actually pay will likely be very short. So, you had better have your Bitcoin wallet ready and loaded, or some agreement with a trusted Bitcoin exchange platform to guarantee the required discretion. Bottom line: nowadays, it might be wise to include a Bitcoin wallet in your Disaster Recovery Plan.

Whatever you decide, decide now and be prepared.

Your phishing awareness campaign may do more harm than good

Phishing and spear phishing campaigns are becoming more and more elaborate, hence more difficult to identify and consequently more successful. Crelan’s €70 million loss in early 2016 is a good example of the potential impact of such a successful social engineering attack.

As automated security systems are unlikely to detect and block the most elaborate and targeted attacks (they need a significant number of similar emails to trigger their alerts), security officers are left with security awareness campaigns focusing on developing the skills to detect (spear) phishing attacks. It’s logical, it’s what security standards advise you to do, but watch out: you may be doing more harm than good!

One of the first mistakes in this approach is to consider awareness (or communication) as a goal in itself. Any communication aims at instilling a change in its recipients. The aim of an awareness campaign is to change people’s behaviour and attitude so they pay more attention to the source of their emails, their contents and the rightfulness of what is asked of them. So basically, we should first measure the current situation and aim for a certain improvement in our “SMART” metrics, the most obvious and significant one being: how many people will fall for a (spear) phishing email?

How do we usually do that? Often by a combination of training, online training, posters and “homemade” phishing campaigns to measure the exposure of the company and tickle our employees. In doing so, we appeal to fear. Fear of contributing to a security incident, to a fraud, to a loss of money; fear of getting fired.

Fear appeal is used to leverage behavioural change on the belief that the emotional reaction caused by fear will increase the likelihood of the appropriate, secure behaviour occurring. You had better think twice because, as is often the case, the devil is in the details.

The effectiveness of fear appeal is still debated (that’s the principle of science), mainly because it might work only under some conditions. In their article “Appealing to Fear: A Meta-Analysis of Fear Appeal Effectiveness and Theories”, Tannenbaum et al. (2015) analysed 217 articles on the subject and found few conditions making fear appeal ineffective, while effects seem most apparent in women and for one-time behaviours.

However, in a review of 60 years of studies on fear appeal, Ruiter et al. (2014) concluded that “coping information aimed at increasing perceptions of response effectiveness and especially self-efficacy is more important in promoting protective action than presenting threatening health information aimed at increasing risk perceptions and fear arousal”. A 2014 study by Kessels et al. using event-related brain potentials and reaction times found that health information arousing fear causes more avoidance responses among those for whom the health threat is relevant.

Still, there seems to be some consensus on the specific conditions such communication must meet: just after the fear arousal, it must provide a solution allowing the audience to reduce the fear with a sense of self-efficacy. To put it simply, we must give our audience a simple way to fix the issue, an easy-to-follow behaviour (one that doesn’t require too much psychological and physical energy). If our solution is so complex that it (or the mere thought of using it) generates more stress than the feared event, our brain will likely avoid this behaviour and deny the reality of the risk (and the fear).

The latest research in neuroscience (and more specifically in the field of neuroergonomics) provides some guidance on shaping our message and solution so our audience can easily grasp our communication and adopt the desired behaviour.

As for most communication, we must avoid saturating the working memory. What does that mean? If we receive too much information at once, our brain cannot process it all. It is like a lift: if more people try to enter than the lift’s capacity allows, the lift is not going to move and will be stuck. The same happens to our brain if we saturate the place where information is stored in order to be processed (what we call the working memory).

The average span of human working memory is 5 objects or, to use Husserl’s terminology, noema. For most people, this span is between 3 and 7 objects.

But what is an object (or noema) in that context? If I give you a phone number digit by digit (let’s say: 1,5,5,5,1,2,3,4,4,6,9), it will be hard for you to memorize the 11 digits of this number, each digit being an object. But if we combine some digits into small numbers (1, 555, 123, 44, 69), it becomes easier to remember. The reason is that these small numbers are also objects (noema) for our working memory, and in that case we don’t saturate it, as there are only 5 objects (within the average memory span).

Why are the small numbers objects and not the large one? Simply because we are used to them. If you were born in 1980, this number can become an object (as you are quite well acquainted with it), while 1256 could require 2 noema (12 and 56).
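The chunking trick above is easy to sketch in code. As an illustration only (the fixed group size is an arbitrary assumption, not a cognitive model), here is how the 11-digit number from the example can be cut into chunks that fit within the 3-to-7-object span:

```python
def chunk_digits(digits: str, size: int = 3) -> list[str]:
    """Cut a digit string into fixed-size groups so the whole
    number fits within the ~5-object span of working memory."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

number = "15551234469"           # the 11 digits from the example above
chunks = chunk_digits(number)
print(len(number), "objects before,", len(chunks), "after:", chunks)
# 11 digits become 4 chunks: ['155', '512', '344', '69']
```

Eleven objects overflow the average span; four chunks sit comfortably within it.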

The same is true with words. Well-known words (and their associated concepts) are easier to process. That is why I put the word “noema” (likely a new name for most readers) next to the word “object” (a quite common word and clear concept) multiple times, so it can serve as a “handle” to better “grasp” the new concept of “noema”. Similarly, using the metaphor of the “handle” to “grasp” a concept eases the understanding (the grasp) of the concept.

To summarize, our solutions, our expected new behaviours, must be as close as possible to something we already know in order to make them easier to grasp.

As a concrete example, if you want your users to check the validity of an email sender’s domain name (just that concept, what’s to the right of the @ in an email address, is not that easy to understand for a lot of people), you should provide a tool available in the first level of the menu or a link in their favourites. The best thing would be to have the information integrated in the email or one click away from it.
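A minimal sketch of such a helper (the allowlist and function names below are hypothetical, purely for illustration): extract what is to the right of the @ and compare it with the domains the organization already trusts, so the user never has to parse the address themselves.

```python
def sender_domain(address: str) -> str:
    """Return the part of an email address to the right of the @."""
    return address.rsplit("@", 1)[-1].lower()

# Hypothetical allowlist your organization would maintain
TRUSTED_DOMAINS = {"example.com", "partner-bank.example"}

def looks_trusted(address: str) -> bool:
    """True if the sender's domain is on the allowlist."""
    return sender_domain(address) in TRUSTED_DOMAINS

print(looks_trusted("alice@example.com"))   # True
print(looks_trusted("alice@examp1e.com"))   # False: lookalike domain
```

Surfaced directly next to the sender’s name in the mail client, this kind of check turns a hard mental task into a one-glance decision.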

E-commerce websites have integrated such concepts well. They understood long ago that if you want a client to order something, he must be able to find it and order it in 3 clicks or fewer. You may know the saying: “the best place to hide a body is on the second page of a Google search”. Meaning? Most people don’t go to the second page; it is a click too far.

Using pictures, drawings (simple ones, keep the 3-to-7-objects rule in mind), stories and jokes helps memorization. Anything relevant to the concept, or totally outstanding, might help too. Emotions help us memorize. If you scare people first, making them laugh or smile with your “solution” might help them memorize it. Go kittens! (see https://www.ezonomics.com/stories/how-pictures-of-kittens-can-help-you-manage-money/)

Also, do not forget a basic principle of behaviourism… the sooner the better. If you want to foster an action, the reward must come very soon, ideally immediately, after the action. So, if people click on a link in a “test” phishing email, you may scare them by pointing out their mistake, but you should also immediately provide a way to avoid this experience the next time, with a few quick tips on what they did wrong and how they should do it next time.

Here is a nice example of a video playing just a bit on fear and providing advice in a non-threatening, aesthetic (it matters too) and very simple way (by http://www.nomagnolia.tv/).

So, you know (a bit more) what to do now!

Is Cybersecurity a good buzzword?

For years now, information security has been a fast-growing market, and for the last couple of years the cyber security segment has been growing even faster. Even in these times of budget cuts in many sectors, the cyber security department quite often manages to negotiate an increase of its operational budget. That’s significant, isn’t it? Moreover, nowadays it has become nearly impossible to ignore the wave of “cyber-” words: cybercrime, cyberterrorism, cybersex or cyberbullying.

You could also not have missed the news about CERT.be, the federal cyber emergency team (CERT used to stand for Computer Emergency Response Team, likely less “sexy” than Cyber Emergency Team), which is, according to its website, “a neutral specialist in Internet and network security” (so cyber security is Internet and network security?). Along with CERT.be, you probably also read about the Belgian Centre for Cyber Security (CCB). Nor could you have missed the buzz around the new Belgian Cyber Security Coalition, or the €1.8 billion allocated by the European Commission to a public-private partnership meant to increase cyber security, in which the private sector is represented by the newly born European Cyber Security Organisation (ECSO). That’s a lot of cyber-related news, isn’t it? Is Asimov’s vision becoming a reality? It sure sounds like we are in one of his Robot series books.

But what does “cyber” mean? How is cyber security different from information security or IT security? Which one of the two is it?

According to NIST, cybersecurity is “the process of protecting information by preventing, detecting, and responding to attacks”. So, is it information security? But according to the new worldwide reference, Wikipedia, “cyber” is part of the “Internet-related prefixes added to a wide range of existing words to describe new, Internet- or computer-related flavors of existing concepts, often electronic products and services that already have a non-electronic counterpart”. So cyber security should be the Internet- or computer-related flavour of information security, which we used to call IT security. But is it?

Because lately I’ve heard the cyber buzzwords used in so many different contexts by so many people (including some executives clearly not knowing what they were talking about), I have difficulty understanding what exactly we are talking about.

Don’t get me wrong, I like the fact that our country’s leaders finally decided to address the increase in Internet-related threats more seriously. As our risk surface is drastically expanding, it is high time to address those risks at a more global level (though we are still far from the clearly necessary worldwide cybersecurity agency, for a lot of obvious political reasons). I also like the fact that my clients’ boards of directors give more focus to “cybersecurity”, whatever they think it is. At last, it provides momentum to raise awareness and improve governance maturity to the necessary level.

What I don’t like in the “cyber” fashion is seeing such an important subject become more and more vague and focused, again, on the technological aspects. With the new buzzword came a lot of new supposed-to-be-panacea products claiming to solve all the problems overnight (or in a few months, which at our timescale is the same). I have even heard of CISOs (Chief Information Security Officers) being rebranded CCSOs (Chief Cyber Security Officers).

Is it really progress? For years we fought to have CISO positions created at board level in order to get out of the IT ghetto. The aim was to be present where information security also belongs: in the organization’s processes and workforce. In 2016, the latest IBM security survey still attributes 60% of attacks to inside jobs. One employee out of five is ready to sell his corporate network credentials. The biggest weaknesses are still in the business processes and in the human beings behind them. Most ethical hackers and red team members know that they don’t need a zero-day exploit to get into a target’s systems; they just need a charming smile and a couple of beers to get what they need to get in. With all the good this new cyber buzzword brings, there is an evil: we are going back to a computer- and technology-focused perception of corporate security issues. Humans, processes and facilities are relegated to second place while they still represent more than 70% of the risks. Does that make sense? Is cyber security an evil buzzword after all?

Few will share this article, as a lot of cyber security professionals won’t dare to challenge the marketing machine that is actually feeding them. And as I wrote, some good came out of this, but it is necessary to see all the side effects and ensure marketing people are not the ones deciding where you should put your focus.

Improve and speed up your Firewall Change Requests management for free

Whether you work for a small or a very large organisation, you probably have one or many firewalls to manage. If you have a half-decent security governance, you probably have someone reviewing and approving any request to update the rules on the firewall(s).

If you have a lot of requests to process and a complex network architecture, you might be lucky enough to use an automated system like FireFlow to process these change requests. If you don’t, you might struggle a bit with this process and with the enforcement of somewhat complex network security rules governing data flows between different subnets.

So, if you don’t have much money to spend on a quite expensive solution, today is your lucky day, as we give you one for free (at least if you already have a Microsoft Office license).

These last months, we have developed a set of Visual Basic functions for Microsoft Excel to help our customers deal with the management of IP networks, FQDNs, URLs, DNS and so on.

Recently, we used these functions to create an Excel sheet meant to serve as a form for Firewall Change Requests (FCR) and to automatically provide a compliance advice based on rules governing data flows between subnets and the use of certain IP ports.

This form and the VBA functions (or the Excel function library) are available on our public GitHub repository: https://github.com/Apalala-sprl/Excel-Functions

It is quite simple to use: the only thing you need to do is fill the two sheets with the list of your subnets and the related network addresses (in CIDR format) and fill in the access matrix defining what is allowed from one subnet to another (see picture below). Once that is done, you can hide these sheets and give the form to anybody in your organisation wanting to change or add a firewall rule.

When requestors encode their request in the form by giving the source and destination IP addresses, the fields automatically determine to which subnets the addresses belong. The form also provides the default treatment of such a flow. As requestors see the result as they type the request in, they are rapidly notified if their request is somewhat unusual or against the rules. It might reduce your workload and speed up the processing of the remaining requests.
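The logic the sheet implements can be sketched outside Excel too. Here is a minimal Python equivalent using the standard `ipaddress` module (the subnet names, ranges and access matrix below are made-up examples standing in for the two hidden sheets):

```python
import ipaddress

# Stand-ins for the two hidden sheets: subnet list and access matrix
SUBNETS = {
    "DMZ":  ipaddress.ip_network("10.0.1.0/24"),
    "LAN":  ipaddress.ip_network("10.0.2.0/24"),
    "MGMT": ipaddress.ip_network("10.0.3.0/24"),
}
ACCESS = {                       # (source, destination) -> default treatment
    ("LAN", "DMZ"):  "allow",
    ("MGMT", "DMZ"): "allow",
    ("MGMT", "LAN"): "allow",
}

def find_subnet(ip: str):
    """Return the name of the subnet containing the IP, or None."""
    addr = ipaddress.ip_address(ip)
    for name, net in SUBNETS.items():
        if addr in net:
            return name
    return None

def default_treatment(src_ip: str, dst_ip: str) -> str:
    """Default compliance advice for a requested flow."""
    src, dst = find_subnet(src_ip), find_subnet(dst_ip)
    if src is None or dst is None:
        return "unknown subnet: manual review"
    return ACCESS.get((src, dst), "deny")    # deny by default

print(default_treatment("10.0.2.15", "10.0.1.20"))  # allow (LAN -> DMZ)
print(default_treatment("10.0.1.20", "10.0.2.15"))  # deny  (DMZ -> LAN)
```

The deny-by-default lookup mirrors the access matrix in the hidden sheets: anything not explicitly allowed is flagged, exactly the feedback the requestor gets while typing.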

If you have some trouble using it, don’t hesitate to contact us. If you improved it in any way, feel free to share your work with us and the rest of the community.

Are Red Team exercises close enough to reality?

A red team is a team of highly skilled professionals with extended and varied skills (think “Mission: Impossible”) acting as the opponent, challenging your plans, your controls, your security governance, your people. As a red team, we must think and behave like the “bad guys”. The goal is to stimulate the critical thinking of your “official” security teams. To achieve that, we attack all the false assumptions that make you vulnerable. We spot the weaknesses and find creative ways to exploit the slightest vulnerability, as any skilled attacker would. (Luckily, they are not all that good.)

The question that came to me while discussing a red team exercise with a customer was this one: are red team exercises close enough to reality?


For sure, we are not as real as the criminal organizations targeting you. We could be, as we have the skills, but we have something that makes a huge difference: ethics and rules. A red team has boundaries. Even taken to the most realistic level, a red team exercise will never lead us to threaten someone’s family or life, let alone to kill someone. We won’t blow up a building to cover our tracks. We won’t release the ultimate virus to wipe all data. Unfortunately, criminals have no such boundaries.

Our client told me that the red team was not supposed to use information that would have been provided in confidence. While red team exercises are often « black box » exercises (meaning we start with very little information on the target), it is entirely possible for attackers to have inside knowledge of your organization. Seriously, in real life there are no rules. If there is enough return on investment, criminal organizations will spend a lot of money, time and means to get your crown jewels. They will use any technique: blackmail, kidnapping, bribery, infiltration. The colleague next to you could be working for a criminal organization, posing as a good guy, even as a security specialist. How would you know?

The latest incidents reported in the press involving banks or the SWIFT network mention takes in the tens of millions: 21, 80 or even 120 million euros of booty for these heists. Quite a motivation, isn’t it? How much would you be ready to invest to get such a reward?

Cybercrime generates approximately a trillion USD every year. 1,000 billion! Law enforcement agencies and security firms around the world report that groups of hackers and criminals are now working together to reach bigger targets with higher stakes. Imagine that an organization capturing just 1/1,000 of that worldwide revenue would have 1 billion USD to fund its operations. That’s a lot of cash. People get killed for less.

So, no, our red team exercises are not as real as they could be, but they are likely close enough to achieve their primary goal: challenging your team and organization to make them better. Red team exercises won’t provide assurance, nor will they cover all your weaknesses, but they will certainly push your teams to achieve their best.

Security: It’s all about trust!

In the past few days, I have had a few discussions and readings that made me think about the importance of the concept of trust in security, and in our lives more generally.

Think about it. Everything we do in security management, training, penetration testing, patching or monitoring is because we don’t trust our employees, our colleagues, our customers, our suppliers or our competitors. That’s why we often have three levels of controls, each level checking the others, so that we can assume at least one person will always do the « right » thing. In our line of work, it makes sense.

But how far should we go? When do we start to trust? When do we make this leap of faith in humanity?

I have worked with pretty paranoid people (for a reason, not the pathological kind) using their own operating system (based on reviewed and modified NetBSD source code) on air-gapped networks. They also had RFID chips in the printer paper to trigger an alarm if you left the facility with printed information. Others electromagnetically wiped and physically destroyed (with presses) any hard disk at end-of-life. Some require 10 months of thorough investigation and background checks before letting someone work on their systems. I have worked with people who had private investigators watching their security guards to make sure they were totally honest (and it wasn’t always the case). In the security community, you will easily find people who will not trust any software to handle their most sensitive information, as it might always contain a backdoor. The same goes for hardware. And they are right to be suspicious, as we have found vulnerabilities and backdoors in nearly every system and application: firmware corrupted by the government of the country manufacturing the processors or motherboards, or spyware built in from the start at the request of the manufacturer’s government. Routers, operating systems, firewalls, remote access applications, switches, phone equipment, and so on. There is a very long list of known backdoors, Trojan horses and spyware discovered in widely used systems. You can imagine the length of the list of those we don’t know about (yet).

When it comes to people, it’s even worse. The Belgian Secret Services have published a quick card warning travellers in some specific sensitive industries on how to prevent information leakage while out of the country. The warning is not restricted to the usual suspects (like Korea, Russia, China or the USA) but extends to our European “friends”. Economic espionage is written into the bylaws of many European countries’ intelligence services. According to our State Security services, if you belong to the targeted categories of people, the question is no longer “if” you will be a victim of spies but “when”. Humans can be manipulated, blackmailed, bought, threatened, seduced: just pick one. We are no more reliable than the rest.

I know it sounds crazy, even paranoid! Unfortunately it’s just the world as it is.

So, how do we function knowing we can trust nothing and no one?

Obviously, we tend to create redundancies, to multiply the controls and the levels of control. In large organisations, you may easily have more than five levels of control (operational control, security, risk management, internal audit, external auditors, compliance, and so on). Even so, we still manage to have incidents. And this still doesn’t answer my first question: when do we start to trust?

For me, trusting is part of the risk management process. It mirrors the intelligence-gathering practice of evaluating your information, your sources and how reliable they are. We trust and we verify. We continuously evaluate the level of trust we can grant to our systems and our people. The higher the stakes, the higher our level of paranoia should be. As usual, we must balance the risk of trusting against the cost of not trusting. If I don’t trust my suppliers or my employees, what will be the cost for my company, my business?

What is also important is to know that we trust. There is a clear difference between believing without knowing and believing while being conscious that we are making a leap of faith. The difference resides in the decision. I don’t believe just because I do; I believe because I have decided it is the best choice to make.

Let me take an example: in my car, if I believe that a green light for me means that cars coming from other directions will stop at their red light, without doubting it or even being conscious that it is a belief, I will never pay attention to the other cars. If I understand that it is a belief, I can adjust my behaviour and check (monitor, watch) whether the other cars comply with it (and obviously hit the brakes if they don’t).

On the other hand, I should also grant some trust to my car manufacturer and have confidence that the brakes will stop my car when I hit them. Otherwise, I wouldn’t dare to drive at all. As always, we need to find the right balance, and we need to do it consciously in order to function effectively.

So, question everything and make sound decisions, knowing that you don’t know for sure.

When controls make you lose control

For some time now, the same observation has kept coming back throughout my various assignments: some controls do more harm than good. In particular, indicators and measurement systems of all kinds.

When we implement a security management system (whether ISO 27001-compliant or not), an IT governance framework (such as COBIT) or corporate governance, there always comes a moment when we must define performance indicators. KPIs, KGIs, PIs, balanced scorecards, SMART controls and so on: what management committee does not ask you for indicators and pretty graphs to brighten up its meetings?

I am cynical on the subject, and here is why. These indicators are supposed to let company executives make informed decisions to steer their business. To do so, KPIs must provide them with information relevant to the company’s objectives. That is the basis of the definition of a good indicator. Yet the relevance of some controls sometimes leaves much to be desired. Making indicators specific, measurable, achievable, relevant to their owner and time-bound is frankly not easy. And in this quest for the almost-perfect indicator, we sometimes end up with the imperfect. This imperfection has serious consequences, because defining a performance indicator means defining the objectives of the person responsible for it. Even if we have defined higher-level objectives that are more relevant to the company (agility, speed of deployment of new solutions, customer satisfaction), what gets measured is what matters most, because that is what people are evaluated on (and what bonuses may be distributed on). And there lies the trap. The individual objective of some managers is no longer to improve the overall performance of the company and customer satisfaction, but to reach or exceed their objectives as measured by our indicators. If the two are not aligned, you can imagine the consequences. If you can’t, here are a few examples encountered over the past few years.

Anecdote 1

A service desk manager refuses to implement an automated password reset system, because the many calls his team receives for this kind of problem take significantly less time than average, which favourably lowers his performance indicator on mean resolution time while keeping the call volume high.

Anecdote 2

A network manager does not improve his infrastructure, because his performance indicators track the percentage of bandwidth used (which is within limits) but ignore latency, which is catastrophic. And of course, since application response time is a complex problem that can depend on the network as well as on systems and applications, the network team has no KPI on it (as they do not control it). Logical, but annoying, because latency is catastrophic and it impacts operations.

I have other anecdotes on the subject, but the goal is not to compile a best-of of KPI failures, simply to illustrate my point.

So what should we do?

What seems most obvious to me is to give everyone high-level indicators on company performance and on collaboration, with shared responsibility. Just as in a 400 m relay team, everyone is responsible for starting on time, signalling and passing the baton. All together towards a common goal.


You too can have fun with safety briefings…

Security officers rarely have a reputation as cheerful fellows. In general, a security officer walking into a meeting is perceived as a killjoy. If that is the case, he has work to do, because, in my humble opinion, he should be perceived as the person who helps the company and its projects move forward by securing them and making them sustainable.

It can never be repeated enough: no security plan, no policy, is of any use if it is not communicated, understood and applied by everyone concerned. In most companies, security is everyone’s business. Too often, unfortunately, security awareness campaigns are unimaginative, incomprehensible and unattractive (not to say ugly), and some even encourage behaviours opposite to their objectives through unsuitable communication and messaging.

Airlines are no exception. To ensure the safety of their passengers, those passengers are asked at the start of each flight to listen to the safety briefing reminding them to fasten their seat belts, stow their hand luggage and breathe into the oxygen mask should it suddenly appear in front of them. If you have ever flown, you may remember it. You probably also remember that it is a slightly boring moment (especially if you fly often). I do not know whether surveys have shown that most passengers do not remember these basic rules, but it seems that some airlines (or sometimes some flight attendants) invest in a more enjoyable delivery of their briefing.

It would be interesting to evaluate whether these initiatives improve the memorisation of the safety rules and, above all, passengers’ compliance with them. Most likely, the main benefit of these initiatives is to give the company a better, friendlier image. There is nonetheless a lesson to be learned here, especially for security officers who are perceived as boring (just like their rules): with a little creativity, you can change the image and perception of the rules and probably also increase “compliance” with them. Here are a few examples of creativity in this area. Even if the principles of persuasive communication are not always taken into account, at least it is fun, and it already fits one of the essential rules: KISSS (Keep it Simple, Stupid & Sexy).