It seems that one of the trendy subjects of the moment is the return of the Fappening (a portmanteau of « fap », a slang term for masturbation, and « happening », according to Wikipedia). As with most sequels, the second one is not better than the first. But my point isn't to draw a parallel with the cinema industry.
For the Fappening 2, it seems that this time two young actresses are the new victims of hackers disclosing intimate or sexy pictures. It should not be necessary, I hope, to remind anyone that this kind of behaviour is not only illegal as a hacking offence but is also aggravated by its intimate implications for the victims.
According to journalists, there is no explanation yet on how these pictures were compromised. Newspapers and blogs have already given plenty of advice on what to do and not to do to avoid such a situation. However, one thing these pictures seem to have in common is that they were taken by someone other than the victims themselves.
When you share information with a third party, you need to ensure that they are at least as careful as you are in their handling of your information.
In this case (and it is just a theory, I have no evidence or clue so far), if friends of these young ladies took pictures of them in some more intimate context, then even if they trust their friends with their lives, they should ensure that these beloved friends were (also) careful and followed good practices with their phones and their « cloud » storage accounts. It is what we (should) do with our suppliers or any third party in a corporate context, and it is also the right thing to do with your friends (if you want to take intimate pictures of yourself).
In the future, fashion stores will likely be equipped with interactive mirrors encompassing cameras, allowing them to display an image of yourself in any outfit available in the store (yes, it already exists). This will be the next IoT (Internet of Things) nightmare and will likely cause more Fappenings if we don't add an S for Security to the IoT acronym.
This week, the PWN2OWN™ contest organized by the Zero Day Initiative (http://zerodayinitiative.com/) is being held during the CanSecWest 2017 conference in Vancouver, British Columbia. A team carried out an attack on Microsoft's Edge browser that allowed them to escape the VMware Workstation virtual machine in which it ran. This exploit fetched them a $105,000 reward. On the same day, another team successfully chained 3 vulnerabilities and also performed a virtual machine escape.
I will state what has been obvious to me since the rise of hardware virtualization technologies: virtual machines aren't as safe as physical ones. I feel stupid writing it, as it is a simple matter of fact, but it seems it has not yet been accepted by a lot of system admins, who are still in denial.
And VMware is not the only one to blame: all the virtualization solutions have already been breached (Xen, KVM, …) one way or another. And those are just the known exploits. So, whoever you're talking to, there is no way (s)he can pretend the risks are the same between a physical and a virtual machine.
Of course, there are economic upsides to using virtualization, and that's why it is a matter of risk management. But when it comes to the crown jewels, we might have to think twice, or at least strongly insist on physical segregation between the most sensitive systems and internet-facing ones.
I am not saying we shouldn't use virtual machines, I am just saying we must stop pretending they are as safe as physical ones. It is just not true. The risks are different and we must take that into account. The wolves can pass the fences…
These past few years, interest and budgets for ethical hackers and pentesters have grown rapidly. They gain more and more visibility (see the Belgian Cyber Security Challenge or the European Cyber Security Challenge). More importantly, consulting companies have been recruiting young and talented hackers by the dozen in recent years.
During the last decade, a lot of (not to say most) TV shows and even novels have included or even starred a hacker:
Lisbeth Salander in Millennium,
Harold Finch in Person of Interest,
Felicity Smoak in Arrow,
Elliot Alderson in Mr Robot,
Skye in Marvel's Agents of S.H.I.E.L.D.,
Christopher Pelant in Bones,
Penelope Garcia in Criminal Minds,
Luther Stickell in Mission Impossible,
and the list goes on.
Nowadays, being an (ethical) hacker is sexy, trendy and well paid. It's no surprise that a lot of young graduates want to embrace this career. In itself, it is a good thing, as we need more skilled and talented professionals in cybersecurity.
However, it might be a bit short-sighted, as AI-powered automated hacking systems are on our doorstep (see DARPA's Cyber Grand Challenge and other AI-powered systems in the links at the bottom of this post).
Nevertheless, that's not really my point here. With all these young geniuses at work uncovering our weaknesses, we still don't have enough talented people to fix the issues.
WE NEED MORE FIXERS!
When I talk about fixers, I don't only mean people skilled enough to fix the vulnerabilities discovered by our code breakers, but also people able to fix governance, processes, organizations and people. We need professionals who can deliver effective security awareness (meaning awareness that makes people change their behaviour), people who can implement flawless IT & security governance, people able to define processes preventing attacks by design, people able to define new strategies and implement them (or at least get others to implement them), people who can understand in which detail the devil is hidden. Hackers just need to find one vulnerability; we have to fix them all. It is less sexy, even more complicated, and there are not enough people who want to fix the problems… but we clearly need more. So, young geniuses, when you get bored of breaking things, please come to the light side and help us fix this mess.
You may have heard that US federal judge Thomas Rueter has ruled against Google over their refusal to hand over to the FBI the personal emails of one of their customers, a refusal based on the fact that these data were stored in a European data center.
This contrasts with 2016, when, in a case against Microsoft, a federal judge ruled that US investigators could not force the company to hand over emails stored on a server in Europe (Dublin, in that specific case).
Of course, there is much more at stake here than access to one customer's email; there are billions of dollars at stake. Most companies and individuals in Europe are moving their data to the cloud. The biggest cloud service providers in the world are US-based companies (Amazon, IBM, Google and Microsoft together representing around 50% of the market), and a large number of European companies are outsourcing their services to these vendors. However, the GDPR (the European General Data Protection Regulation; see also Wikipedia for an overview) requires strong protection of our personal data (including our emails). As the US and the EU aren't totally aligned on this matter, most European companies require their cloud providers to store and process their data in European data centers in order to guarantee that the European regulation will be enforced.
And now, this new ruling might jeopardize all that (or at least be the start of it). If the sole fact of having a US-based company as a supplier allows the US to bypass the GDPR, would European companies still be allowed to use them to store personal data? Would we see European companies and individuals leaving Gmail, Google Apps, AWS, Outlook and other US-based services for European-based and -owned companies? It would be a big mess… and maybe a huge opportunity for some European challengers.
Why is usability important for security management? Is it even important? Obviously for a lot of people, it is not. And that’s a problem. But what is usability anyway?
According to Wikipedia, and I find the definition pretty accurate, usability is “the ease of use and learnability of a human-made object such as a tool or device. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use”.
In other words, usability is the process of designing things so they can be easily used and mastered by their end users. Usability is not just about design, it is a science. It is about optimizing our environment for our brains and our bodies. As an example, usability is when you put handles on a box so it is easier to lift. Google, the most visited website in the world, is a model of usability: straight to the point, one field, and you get what you need in one click. It even completes the words for you as you type. There's a reason they are number one, and it's called user experience (UX).
Nowadays, usability, neuroergonomics and even neuromarketing are at the heart of successful designs. Whatever you are selling, you had better make it easy to use and even sexy. The traditional KISS (Keep it simple, stupid) design requirement has gained an additional "S" for sexy (KISSS: Keep it simple, stupid and sexy). The article I wrote about the ineffectiveness of SPAM awareness sessions was also an advocacy for using insights from the cognitive sciences to design more effective awareness material.
Why do I care?
If you are a product manager at a startup, you are probably already aware of all the usability requirements for your products. That's where startups win the war against the old dinosaurs: better engineered products with better usability and even sexiness. We all learned from the master's success: Apple. Steve Jobs knew the rule for making something usable: fewer buttons. Sleek design is all about simplicity.
But if you are working in security management, or as a security project manager, or even as a security architect, it is more likely that you won't care about usability. You might think that your job is to make your company secure, not sexy. And you're right about that. Except that, when it comes to humans, you're probably failing (in large part). You may think: « These stupid end-users still don't get it. » Of course, they still manage to use weak passwords. If you force strong passwords, they write them down or use the same one everywhere. They still don't know the security policies. They watch the very nice slides you showed them during the mandatory security training at their induction, but the next day they are already sharing their passwords with their colleagues. Don't even mention their inability to spot a phishing attempt! And let's not speak about your system administrators, those fools who believe they are the kings of the realm and have left so many vulnerabilities open in their systems that the latest vulnerability report you received was so long you couldn't finish it in one day. Surely you will make a strong point at the next security steering committee to ensure these operations guys' boss understands he must bring them back to the righteous path.
Ring a bell? Not even a little bit? I thought so.
If we believe an old saying, wisdom is being able to differentiate between what you can change and what you can't. The goal here is to focus your energy and your efforts where they matter. So, think again about your problems. What did you do? You ran awareness sessions? You wrote very thorough policies and standards? You made sure people were obliged to read them, to sign in blood that they had read your literature and that they would abide by your rules?
Did it work? How well? Be honest: some miscreants continue to refuse to follow the rules of the holy god of security. They are probably psychopaths! Or could they just be human? What if you could increase the probability that they will read your policies? Even better, what if you could improve the odds of having them change their behaviour and embrace your security culture? You don't believe in Santa Claus? Me neither, but I do believe in science!
Neuroergonomics & neuromarketing of security!
Neuroergonomics and neuromarketing are the catchwords for the use of social psychology and the neuro-cognitive sciences to increase your desire to use a product and to improve your ability to handle concepts, remember things, or even become addicted to some applications (think about Facebook or Twitter). If people can influence what you eat, what you drink, what you wear, what you watch or what you read, why couldn't we use this knowledge to change your people's attitude towards security?
Is it worth it? Well, are you already paying people to communicate, make videos and draw cartoons, while you still have too many incidents and non-compliance? Yes? Then maybe you should start investing in better designed solutions and make usability a requirement for all the projects, tools or "products" security wants to sell.
If you have an intranet, your security policies must be one click away from the front page.
You must have a clear organization, a hierarchy and a search engine allowing anybody to quickly find the policy or procedure they need.
Policies should go straight to the point, from the reader's point of view, starting on the first pages.
Forget lawyer or technical talk, use common vocabulary.
Do's and Don'ts are likely more effective than long descriptions.
Use words and situations your audience is familiar with.
Ensure your rules are translated into actions in their processes and procedures.
Ensure these procedures are pragmatic and easy to read.
Use pictures, screenshots, beautifully designed templates. Make it look more like a fashion magazine than an old book.
Use positive words. Any command that can be better performed by a dead man is a bad command (example: « Don't use short passwords »… a dead man can do that very well. Rather, prefer « use long, secure passwords »).
Group similar things together.
Be consistent. Even better, be congruent (use multiple associations together), like red + triangle to signal Don'ts and green + checkbox to signal Do's. Keep the colors consistent (red = negative, green = positive).
Consistently use the same word to designate one thing. Even if synonyms can make reading less monotonous, always using the same word to designate one object or concept makes it easier to understand (even more so for new concepts).
Keep it as short as possible (more than 10 pages is clearly too much).
Use symbols, signals, icons and pictures.
Keep the rule of 3 in mind: if you want to explain a concept, break it down into 3 parts/steps/components, then explain the 3 sub-concepts (using 3 other steps/concepts/parts) and so on until people can understand it. You can go up to 5 « objects » but not higher.
Embed security processes into existing processes.
If a process works, don’t fix it.
If you can streamline it, do it, even if it is not your first job. Making people's lives easier will facilitate the acceptance of the controls and might even improve people's attitude towards security.
Create links between all processes so they can benefit from each other, e.g. ensure vulnerability scans feed the CMDB to ensure consistency. (It is supposed to be like that in a perfect world, but that's just theory.)
Forget long swim-lane drawings or decision trees spanning 3 pages; keep it short by splitting the process.
Changing behavior is something we do out of emotion, not based on rational thinking. Even if rational thoughts can lead to a change, we initiate this change only if we connect these thoughts with some emotion.
Use real, concrete situations (something that happened or could happen).
They must be relevant for your audience (use scenarios involving your audience, allowing them to identify with the characters).
Use as much as possible what they already know well (places, situations, products, applications, organization, but also more personal things: kids, sports, cooking, walking in the street, …).
Show them the concrete consequences for people when they don't comply with the rules or the secure behaviour (it's easier to have feelings toward people than toward an organization).
Foster self-identification with your characters by using little positive details your audience can relate to (« Sam likes to take a coffee with his colleagues, Alice likes… »).
Songs, rhymes, jokes, kittens: anything that stands out will help memorization. So use it when it is important (if you use the same trick too often, its effectiveness tends to fade).
Associate non-« sexy » items (like security rules) with more attractive ones (a nice place, a smile, a cute cat picture, a beautiful woman (yes, it works for both men and women), a good song).
Repeat, repeat and repeat the message, but change the format so it doesn't get boring and so you can use various ways to reach people.
We are all different; what works for you won't necessarily work for everybody.
PS: Yes, I could make this list more « sexy » and it will likely come, but it will be in the (near) future 🙂
I have recently received SMS messages supposedly sent by young ladies in search of a soul mate. Within the SMS there is a link to a website, with a specific number in the URL, giving access to a picture of a pretty young naked girl (no, I didn't click on it; I tried it from a secured virtual workstation with all protections on and through a Tor gateway). Fortunately, this picture doesn't seem to carry any payload.
I called my provider to ask how I could stop this (in France, the number 33700 helps you with SMS spam). According to my provider, the goal of such SMS is to get men to reply, making their mobile bill a bit more expensive than usual. Except for deactivating the mobile commerce option on my number, there is no way to prevent this and no place to report such malicious SMS.
At the same time, we can understand that operators are in no hurry to solve a problem that probably creates substantial revenue, as they likely take a nice percentage of margin on the operation.
Unfortunately, as SMS are cheap (and SMS servers can easily be hacked), they can also be used to distribute malicious payloads without going through the usual anti-malware filters that are now quite common on most email services. So, if we do nothing, this can become (if it is not already the case) the new channel to target smartphones (and you know how much sensitive information your smartphone holds).
So, when will we have a central platform to gather information on, block and prosecute such malicious and illegal (is it?) behaviour?
When you're working in the security industry, being paranoid is kind of natural (or is it the other way around?). So, when you see how easily people, processes and technologies can be hacked, you rapidly become suspicious of everything. We all know bad things can happen, and most of the time we try to mitigate the risks (without even thinking too much about it). Business as usual, so to speak. However, while I have a good idea of the risks our future is bringing us (which makes me even less worried about my business' future), it seems that most people don't imagine how much danger the Internet will bring them. So here are some clues.
The new buzzword getting a lot of attention in the media lately is probably IoT: the Internet of Things. According to the media, it's the IoT that allowed hackers to bring websites like Amazon and Netflix to their knees for a few hours on October 21st. But that's a mistake. Although the IoT has led to some specific new technologies, like Bluetooth 4.1 or ZigBee, to accommodate the low-consumption and low-cost requirements necessary to embed technology in nearly all objects, it is probably a mistake to see the IoT as something new or different. As Bruce Schneier said recently in front of the US Congress, we should not see these as objects with computers in them (and an Internet connection) but rather as computers that do things. A Tesla is a computer with wheels (and when you see how Tesla manages its updates and its manufacturing process, it is closer to the software industry than to the car industry's way of working), a smartphone is a computer with a microphone and a 4G connection, a connected fridge is a computer with an extra cooling system, and so on.
Bottom line, these connected objects are all computers and we must treat them as such. So, as for all computers, when it comes to managing security we should think about patch management, access control, hardening, change management, release management, network segregation, encryption, key management, user awareness and training, and all these processes and best practices. Unfortunately, most connected object manufacturers didn't spend enough time and money designing secure, easily upgradable objects with strong and secure communication protocols. Consequently, the future is now… and we are not ready for it.
But what is our future? Let's get a glimpse of it. In the tenth episode of the second season of "Homeland", Nicholas Brody helps terrorists kill a political figure by giving them his pacemaker serial number, allowing them to hack it and induce a heart attack.
In another TV show, "Blacklist", a computer genius remotely triggers the airbag of a moving car, causing the car to crash and the death of its driver.
Is this science fiction? Unfortunately, not anymore! Exploits on « smart » cars are becoming more and more frequent. More recently, British and Belgian researchers devised a wireless wounding attack on pacemakers (1). While the latter exploit needs specific and rather costly hardware (€3,000 to €4,000), we are just one step away from having a ZigBee or BT 4.2 interface. Do you want to kill someone with your smartphone? Don't worry, you won't have to wait too long.
At the same time, as other devices with less deadly capabilities are spreading over the world, they provide a potential army of insecure devices that can be used for Distributed Denial of Service attacks, as seen recently, but also, why not, to perform parallel tasks: helping to brute-force passwords, crack cryptographic keys or hide communication sources by bouncing thousands of times on these little soldiers that we provide to these hackers. Nice, isn't it? We purchase the devices that will be used against us in the near future. To be honest, for most people, including a lot of security specialists, it is not easy to tell the difference between a secure IP camera and an insecure one, simply because we don't have time to test everything and there is no useful and relevant certification for that. So think about the number of « computers » you have at home: your internet router, your tablet, your PC or your Mac, your smartphones, your video surveillance camera, your printer, your TV box, your Blu-ray player, your « smart » TV, your alarm, your new « connected » fridge, your smart thermostat, your kids' PSP, the IP doorbell and so on… Think about it: in your home alone, you may have more than 10 little future soldiers for the next hacker's army. Android, iOS or IP cameras, they nearly all have exploitable vulnerabilities.
So, we have an army, and we will soon have legions of potential targets for a new kind of attack: DoL attacks (Denial of Life). Imagine ransomware targeting your pacemaker, large-scale attacks on cars to cause traffic jams or worse, new hitmen (version 3.0) changing the medication of patients in hospital, overdosing people. Just watch any episode of « Person of Interest »; they were just a few inches away from actual reality… and we are getting there.
It sounds crazy, doesn't it? As Bruce Schneier said, the Internet is not that fun anymore. It's not a game anymore. Things are getting serious and we should act accordingly, not only at government level but also in industry and in the civilian world. We should ask our suppliers and manufacturers to secure their devices, to make them safe AND easy to control.
In the past months, the press has reported several security incidents involving companies falling victim to ransomware (1)(2). Most of the time, a ransom had to be paid in Bitcoins. It's logical, as Bitcoins make it much easier and cheaper to launder the money and hide the recipient than traditional money laundering circuits.
You may decide that dealing with cyber criminals is unacceptable (as with terrorists or kidnappers), but if you don't have such a policy and the amount of the ransom is lower than the overall cost of restoring your services by yourself (including manpower, business losses and public image), you may decide to pay the price. In that case, time is of the essence. In order to limit the impact and comply with the criminals' conditions, you might have no more than 48, or even just 24, hours to pay your "lack-of-sufficient-security fine".
But how do you pay in Bitcoins, and keep it under the radar, in such a short amount of time? Considering the time spent debating the question "do we pay or not", the time left to actually pay will likely be very short. So, you had better have your Bitcoin wallet ready and loaded, or some agreement with a trusted Bitcoin exchange platform to guarantee the required discretion. Bottom line: nowadays, it might be wise to include a Bitcoin wallet in your Disaster Recovery Plan.
Whatever you’ll decide, decide now and be prepared.
Phishing and spear phishing campaigns are becoming more and more elaborate, hence more difficult to identify and consequently more successful. Crelan's €70 million loss in early 2016 is a good example of the potential impact of such a successful social engineering attack.
As automated security systems are unlikely to detect and block the most elaborate and targeted attacks (they need a significant number of similar emails to trigger their alerts), security officers are left with security awareness campaigns focusing on developing skills to detect (spear) phishing attacks to try to mitigate this risk. It's logical, it's what security standards advise you to do, but watch out: you may be doing more harm than good!
One of the first mistakes in this approach is to consider awareness (or communication) as a goal in itself. Any communication is aimed at instilling a change in its recipient(s). The aim of an awareness campaign is likely to change people's behaviour and attitude so they pay more attention to the source of their emails, their contents and the rightfulness of what is asked of them. So basically, we should first measure the current situation and aim at a certain improvement in our "smart" metrics, the most obvious and significant one being: how many people will fall for a (spear) phishing email?
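As a toy illustration of tracking that metric over time (the campaign names and figures below are invented for the example), the "click rate" per test campaign takes only a few lines:

```python
# Hypothetical results from internal "test" phishing campaigns;
# the numbers are invented for the example.
campaigns = [
    {"name": "Q1 baseline", "recipients": 400, "clicked": 96},
    {"name": "Q3 after training", "recipients": 400, "clicked": 52},
]

for c in campaigns:
    # The metric: share of recipients who fell for the phishing email.
    rate = c["clicked"] / c["recipients"]
    print(f"{c['name']}: {rate:.0%} fell for the phishing email")
```

Comparing the rate before and after an awareness action is what turns "we did awareness" into a measurable improvement.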
How do we usually do that? Often by a combination of training, online training, posters and "homemade" phishing campaigns to measure the company's exposure and tickle our employees. In such cases, we appeal to fear: fear of contributing to a security incident, to a fraud, to a loss of money; fear of getting fired.
Fear appeal is used to leverage behavioural change, as one believes the emotional reaction caused by fear will increase the likelihood of the appropriate, secure behaviour occurring. You had better think twice because, as is often the case, the devil is in the details.
Fear appeal effectiveness is still a debatable question (that's the principle of science), mainly because it might work only under some conditions. In their article "Appealing to Fear: A Meta-Analysis of Fear Appeal Effectiveness and Theories", Tannenbaum et al. (2015) analysed 217 articles on the subject and found few conditions making fear appeal ineffective, while effects seem most apparent in women and for one-time behaviours.
However, in a review of 60 years of studies on fear appeal, Ruiter et al. (2014) "concluded that coping information aimed at increasing perceptions of response effectiveness and especially self-efficacy is more important in promoting protective action than presenting threatening health information aimed at increasing risk perceptions and fear arousal". A 2014 study by Kessels et al., using event-related brain potentials and reaction times, found that health information arousing fear causes more avoidance responses among those for whom the health threat is relevant.
Still, it seems there is some consensus regarding the specific conditions such communication must meet: just after the fear arousal, it must provide a solution allowing the audience to reduce this fear with a sense of self-efficacy. To say it simply, we must provide a simple way for our audience to fix the issue, i.e. an easy-to-follow behaviour (one that doesn't require too much psychological or physical energy). If our solution is so complex that it (or the mere thought of using it) generates more stress than the feared event, our brain will likely avoid this behaviour and deny the reality of the risk (and the fear).
The latest research in the neurosciences (and more specifically in the field of neuroergonomics) provides some guidance on shaping our message and solution so that our audience can easily grasp our communication and adopt the desired behaviour.
As for most communication, we must avoid saturating the working memory. What does that mean? If we receive too much information at once, our brain is not able to process it all. It is like a lift: if more people try to enter than the lift's capacity allows, the lift is not going to move and will be stuck. The same happens to our brain if we saturate the place where information is stored in order to be processed (what we call the working memory).
The average span of human working memory is 5 objects or, to use Husserl's terminology, noema. For most people, this span is between 3 and 7 objects.
But what is an object (or noema) in that context? If I give you a phone number digit by digit (let's say: 1, 5, 5, 5, 1, 2, 3, 4, 4, 6, 9), it will be hard for you to memorize the 11 digits of this number, each digit being an object. But if we combine some digits into small numbers (1, 555, 123, 44, 69), it becomes easier to remember. The reason is that these small numbers are also objects (noema) for our working memory, and in that case we don't saturate it, as there are only 5 objects (within the average memory span).
Why are the small numbers one object each and not the large one? Simply because we are used to them. If you were born in 1980, this number can become an object (as you are quite well acquainted with it), while 1256 could require 2 noema (12 and 56).
The same is true with words. Well-known words (and their associated concepts) are easier to process. That is why I put the word "noema" (likely a new name for most readers) next to the word "object" (a quite common word and clear concept) multiple times, so it can be used as a "handle" to better "grasp" the new concept of "noema". Similarly, using the metaphor of the "handle" to "grasp" a concept eases the understanding (the grasp) of the concept.
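The chunking trick above can be sketched in a few lines of Python (a toy illustration; the group sizes chosen are just one arbitrary way to split the digits):

```python
def chunk(digits: str, sizes: list[int]) -> list[str]:
    """Split a digit string into memorable groups ("objects")."""
    groups, i = [], 0
    for size in sizes:
        groups.append(digits[i:i + size])
        i += size
    return groups

# 11 digits taken one by one: 11 objects, well above the ~5-object span.
number = "15551234469"
# Grouped, the same number is only 5 objects, within the average span.
print(chunk(number, [1, 3, 3, 2, 2]))  # ['1', '555', '123', '44', '69']
```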
To summarize, our solutions, our expected new behaviours, must be as close as possible to something we already know in order to make it easier to grasp.
As a concrete example, if you want your users to check the validity of an email sender's domain name (just that concept is not easy to understand for a lot of people: it's what is on the right of the @ in an email address), you should provide a tool available in the first level of the menu or a link in the browser favourites. The best thing would be to have the information integrated in the email, or one click away from it.
E-commerce websites have integrated such concepts well. They understood long ago that if you want a client to order something, he must be able to find it and order it in 3 clicks or less. You may know the saying: "the best place to hide a body is on the second page of a Google search". Meaning? Most people don't go to the second page; it is a click too far.
Using pictures, simple drawings (keep the 3-to-7-objects rule in mind), stories and jokes helps memorization. Anything relevant to the concept, or totally outstanding, might help too. Emotions help us memorize. If you scare people first, making them laugh or smile with your "solution" might help them memorize it. Go kittens! (See https://www.ezonomics.com/stories/how-pictures-of-kittens-can-help-you-manage-money/.)
Also, do not forget a basic principle of behaviourism: the sooner, the better. If you want to foster an action, the reward must come very soon, ideally immediately, after the action. So, if people click on a link in a "test" phishing email, you may scare them by pointing out their mistake, but you should also immediately provide a way to avoid this experience the next time, with a few quick tips on what they did wrong and how to do it next time.
Here is a nice example of a video playing just a bit on fear and providing advice in a non-threatening, aesthetic (it matters too) and very simple way (by http://www.nomagnolia.tv/).
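A minimal sketch of such a helper could look like this in Python (the trusted-domain list is purely hypothetical, and a real tool would also need to handle display-name spoofing and look-alike domains):

```python
from email.utils import parseaddr

# Hypothetical list of domains this organization considers trusted.
TRUSTED_DOMAINS = {"example.com", "partner.example.org"}

def sender_domain(from_header: str) -> str:
    """Return what is on the right of the @ in a From: header."""
    _, address = parseaddr(from_header)   # strips the display name
    return address.rpartition("@")[2].lower()

def looks_trusted(from_header: str) -> bool:
    return sender_domain(from_header) in TRUSTED_DOMAINS

print(sender_domain("Alice <alice@Example.COM>"))   # example.com
print(looks_trusted("Bob <bob@evil-example.net>"))  # False
```

The point is not the code itself but the usability lesson: the check must run where the user already is (the mail client), not in a separate tool they have to remember to open.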
When it comes to letting our children discover the wonders the Internet has to offer, and only the wonders, there is Google's search filter to limit results to morally acceptable sites. But you may not want to let Google monitor your children's activity, and you may prefer to restrict the machine's access to a search engine dedicated to our little ones. Qwant has granted your wish.
If you don't know Qwant, it is a French search engine created in 2013. Thanks to a €25 million investment from the European Investment Bank at the end of 2015, the engine is now gaining stature and opening up to Europe.
In 2014, Qwant launched Qwant Junior (https://www.qwantjunior.com/), a search engine for children and teenagers, offering them targeted content in terms of sites, information, images and news. Qwant Junior can be set as the default search engine in the search bar of your favourite browser, preventing our children from accidentally stumbling on an inappropriate search result via Google or Bing.
As Qwant and Qwant Junior are French products, they are subject to European privacy legislation, and they even pride themselves on being « the search engine that respects your privacy ».
With the launch of our Linux distribution project for the Raspberry Pi aimed at children and schools (Kidnux, soon online at https://www.kidnux.org), we will very soon publish new links to educational sites and new tips to secure, easily and for free, the computers used by your children, including how to prevent browsing to unsavoury sites.