Lots of activity around radio embedded systems (Internet of Things)

We all know that the Internet of Things is going to be the next revolution, and it is already on the way. We also know that this revolution will not come from the main existing IT leaders such as IBM, Microsoft or Amazon (I won't include Google, because they are a good player here) but from many companies and startups that do not exist yet. It is really interesting to see what is happening on crowdfunding platforms like Kickstarter (OK, Amazon is part of it) these days: we can clearly see the Internet of Things taking shape, with lots of connected gadgets being funded. But you can also find all the raw material that future companies will use to make this revolution real.

A year ago there was a lot of buzz around the Raspberry Pi (and it continues to grow, even if I think the reality of this platform falls short of its promise), which is an interesting platform, as is the BeagleBone Black coming this month. In my opinion these platforms are good for geeky projects but suffer from two issues: they are not natively wireless and they consume too much power. I assume the Internet of Things will be mostly mobile and wireless, because I won't run cables all over my house to connect these objects: I want WiFi and battery-powered devices!


Infrastructure: losing control

When I was hired by the company, ten years ago, the infrastructure had just started to be outsourced. This was a big change and a complex challenge, as the company lost its servers, its data centers and the associated skills. It had to transform itself, and the governance changed. With some distance, this change was not so huge after all, as the company kept control of its infrastructure. The way decisions were taken was certainly different, but in the end the company was still in a position to make decisions, set strategy and modify the rules based on its own choices.

 

What is coming nowadays seems different and more complex to manage; from my point of view, cloud computing is a strong challenge to companies' control over their infrastructure. It is not only about SaaS services entering the company, over which we have no control on how operations and hosting are handled by the provider, and for which, given the maturity level of the offering, the quality is generally below industrial standards.

There is also the "cloud" mindset: the idea that everything can now be done faster, instantly and at no cost. As a consequence, parts of the company not usually in charge of IT operations spin up their own external cloud environments to move faster and cheaper, because it is so easy! And it is true: they do move faster, they do move cheaper. But they are not doing the same thing; they deliver a lower level of service, weaker backups, weaker security. It works because their perimeter is limited to the couple of applications they manage, not thousands. But as a consequence, another piece of the infrastructure slips out of the control of the infrastructure teams.

In fact, compared to what can be provided today, the old infrastructure standards are too complex and too slow. In the minds of managers and project managers, being agile means looking to external providers (cloud, IaaS, PaaS, SaaS) for the solution. The consequence is that we look outside to run POCs and later, why not, development. In a way this is not wrong: we do have to be more agile, particularly in these environments. But the truth is that we keep losing control over the infrastructure while multiplying the number of service providers and the diversity of solutions.

For some other parts of the business, we directly decide to split them out of the standard IT department so they can progress more efficiently, alone, far from the industrial standards. Once again, it is true: it is more efficient from a short-term point of view, with lighter quality processes and agile, recent technologies. But as a consequence, the split-off part loses the industrial skills we have, and the mother company loses the opportunity to learn about efficient IT. In the longer term, it also means that the main IT department will one day have to manage non-standard technologies, when what was split off in the past is eventually merged back.

Looking at these facts, growing month after month, I really think infrastructure teams are losing control. We are already considered slow and expensive: because we manage complexity and technology diversity, because we are driven by contracts with cost constraints, because we are constantly challenged on SLAs, and because we have inherited a heavy silo organization. The truth is that all of this comes with the age of this kind of organization; we could also call it maturity. I am pretty sure the cloud-like solutions will become more expensive and less agile with each step of progress (security, availability and so on).

Do not think I am against cloud solutions: I am fully convinced of the advantages of these technologies. I simply consider them with reserve, given their current maturity, and I think they can take their place in the existing environment and help change the current silo organization into a more agile one. What worries me is the future impact of this loss of control and the multiplicity of actors we will have to manage. If, in parallel, no investment is made in the current infrastructure because the money goes to outsourcing to external clouds, I am pretty sure we are going to hit the wall. That means facing availability problems we cannot manage, data loss we cannot recover from, and service providers closing down that we cannot replace, while at the same time being challenged more and more on managing an internal environment where only the old, fat, really complex applications remain.

I am also convinced that this situation is due to the recent arrival of these technologies, which are coming fast and growing faster than our infrastructure organizations can adopt them, and that we will react soon. I hope.

 

How to choose a SaaS service

More and more companies are moving to SaaS (Software as a Service) solutions. They offer a quick and generally less expensive alternative to the standard software approach. This has been made possible by the standardization of the solutions, the mutualization of hardware resources and a pay-as-you-grow economic model. From a business point of view, it is also a nice way to bypass the terrible internal IS and IT services and obtain tomorrow what they have been dreaming of for a long time.

Beyond this nice picture, even if SaaS vendors expect you to trust them and only talk about services, you should never forget that underneath there is a hardware and IT operations layer that your IS/IT team must review before signing any contract.

This article follows a brainstorming session we had on this topic and lists some areas to investigate before contracting anything.


Oracle DB performance over low latency networks

Here is an interesting article on the impact of latency on some usual Oracle SQL statements over local, LAN and WAN networks.

http://markbairden.blogspot.fr/2012/03/database-performance-measuring-effects.html

The WAN latency used is 12 ms, which is small for this type of connection, yet the impact is huge, with a factor of 10. Something to take into account in your designs!
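To make that factor concrete, here is a small back-of-envelope model of why round-trip latency dominates chatty SQL workloads. The 12 ms figure comes from the article; the round-trip counts and the per-trip server cost are invented for illustration, not measured.

```python
# Rough model: total query time = one latency hit per round trip + server work.
# Numbers below are illustrative assumptions, not benchmark results.

def query_time_ms(round_trips, latency_ms, server_ms_per_trip=0.1):
    """Elapsed time for a statement that needs `round_trips` network round trips."""
    return round_trips * (latency_ms + server_ms_per_trip)

# A naive row-by-row fetch of 1000 rows costs ~1000 round trips.
local = query_time_ms(1000, 0.1)   # local socket, ~0.1 ms latency
wan = query_time_ms(1000, 12.0)    # WAN, 12 ms as in the article

print(f"local: {local:.0f} ms, WAN: {wan:.0f} ms, ratio: {wan / local:.0f}x")
```

The model also shows why batching (array fetches) helps so much: dividing the number of round trips divides the latency cost by the same factor.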

Thank you Mark for this article !

New low-consumption Atom devices

Not really news, but an interesting update on two new devices on sale since March 4th: the Atom family has at least two new cores, the N2650 and the N2850. Both are dual-core, four-thread CPUs for netbooks with a really low TDP.

The N2850 offers a 2 GHz system with a 10 W TDP; the N2650 offers a 1.7 GHz system with a 3.6 W TDP.

This is to be compared with my current reference (which may mean nothing to you, but still): the D525, offering 1.8 GHz for 13 W. Moreover, the IGP has been pushed from 400 MHz to 640 MHz on the N2850.
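The figures above can be turned into a crude performance-per-watt comparison. Clock rate is only a very rough proxy for throughput, and the numbers are simply the ones quoted in this post:

```python
# Crude GHz-per-watt comparison of the Atom parts mentioned above.
chips = {
    "N2850": {"ghz": 2.0, "tdp_w": 10.0},
    "N2650": {"ghz": 1.7, "tdp_w": 3.6},
    "D525":  {"ghz": 1.8, "tdp_w": 13.0},  # the older reference point
}

for name, c in chips.items():
    print(f"{name}: {c['ghz'] / c['tdp_w']:.2f} GHz per watt")
```

By this (admittedly naive) metric the N2650 stands out, delivering more than three times the clock-per-watt of the D525.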

This configuration sounds really interesting for building small, fanless machines; now we need to see the first boxes including these chips.

See the Atom family on Wikipedia: http://fr.wikipedia.org/wiki/Intel_Atom

Why the PirateBox concept is just an experiment

After a couple of days hacking a PirateBox based on the MR3020, I am really happy to have brought this concept to life. But behind announcements like "share freely with your neighborhood for less than 40€", the reality differs.

The first thing is that 40€ is the price, shipping excluded, of the MR3020 router alone. You must then add storage (25€ for 16 GB) and a battery if you want to be mobile (50-100€). That makes a big difference: it becomes almost as expensive as a tablet, on which you could install the software and be mobile with more storage, better mobility and many more capabilities. So, no magic here.
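Summing the figures above (taking the battery at the low end of its 50-100€ range) shows how quickly the bill grows past the advertised 40€:

```python
# Real cost of a mobile PirateBox, using the component prices quoted above.
parts = {
    "TL-MR3020 router": 40,   # advertised price, shipping excluded
    "16 GB USB storage": 25,
    "battery pack": 50,       # low end of the 50-100 EUR range
}
total = sum(parts.values())
print(f"total: {total} EUR")
```

At roughly three times the headline price, the comparison with an entry-level tablet becomes hard to ignore.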

OK, you will tell me that you can reuse an old external hard drive and give up mobility, just powering the box up at home. And you are right: that is a nice capability for a reasonable price. But then you will have to deal with the weak WiFi power offered by this low-cost router. You will only be able to exchange with your direct neighbors: the ones you cross daily in the stairs. Why not, it sounds good...

The last argument I would like to share, to finish playing party killer, is about the Hadopi question and the feeling of piracy freedom. Are you mad, guys? As far as I know, getting the address of a pirate is not so easy when downloading on the Internet: even if it has been simplified, you are one in a million, the chance of being caught is small, and you would be identified as a consumer rather than a provider. Identifying the owner of a PirateBox, on the other hand, is quite easy by triangulation; then you just need to ask a judge for the paperwork required to catch a provider.

In my opinion, a PirateBox has to be mobile so it cannot be caught, or it needs to be ownerless, hosted in the public domain and powered by solar energy (like the USB sticks embedded in walls, but for more money). That would be a great and interesting thing, but it requires investment.

For all these reasons, I love the idea and I like the product from a technological point of view (an interesting and easy hack), but I do not consider it ready for the mass market. It is hype, it is geek, not consumer.

How to improve the PirateBox?

The PirateBox is an interesting concept, but it has a lot of limitations due to the fact that the distance covered by a WiFi connection is really limited. Mobile devices able to cover a larger distance also have limitations due to the time needed to transfer any content.

In fact, you cannot expect to grab content from someone you cross in the street, because you may be out of range before the file transfer finishes. Moreover, given the current memory size of portable devices, you cannot expect to share a lot of content on them.

So the idea is great, and the future could change everything I just said. But today the reality is that this system is not really usable as a true anonymous, unlimited sharing system.

To improve the system, I would imagine a network of PirateBoxes. This idea requires a larger number of devices, but it would allow sharing a large amount of data even on small memory cards. Each PirateBox would have a second WiFi adapter used to connect to another PirateBox, get the list of its files and share them across this point-to-point connection. As each PirateBox connects to a second one, we could imagine dynamically building a large PirateBox network.

File requests and data transfers would go from one point to another without keeping any trace of the transfer outside the point-to-point exchange.

I assume there is some interesting research around this idea, as the system has to build a dynamic network, avoiding cycles and optimizing communications to make the network as large as possible, using a decentralized system to manage all of this.
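As a sketch of how such a decentralized search could avoid cycles, here is a toy flooding model. The class, the TTL value and the triangle topology are all invented for illustration; this is not part of any actual PirateBox software:

```python
# Toy PirateBox mesh: each box floods file queries to its peers,
# using a TTL (hop limit) and a seen-set to avoid cycles.

class Box:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.peers = []          # point-to-point wifi links to other boxes

    def link(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def search(self, filename, ttl=4, seen=None):
        """Return the names of boxes holding `filename`, flooding the mesh."""
        seen = seen if seen is not None else set()
        if self.name in seen:
            return []            # cycle avoidance: never revisit a box
        seen.add(self.name)
        hits = [self.name] if filename in self.files else []
        if ttl > 0:
            for peer in self.peers:
                hits += peer.search(filename, ttl - 1, seen)
        return hits

# Three boxes linked in a triangle: a query still visits each box only once.
a, b, c = Box("a", ["song.ogg"]), Box("b", []), Box("c", ["song.ogg"])
a.link(b); b.link(c); c.link(a)
print(a.search("song.ogg"))
```

A real implementation would replace the recursive calls with messages over the second WiFi adapter, but the two ingredients (hop limit plus a "query already seen" memory) are the classic way unstructured peer-to-peer networks keep flooding from looping forever.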

Anyone up for developing such a thing?

Ainsi Fon fon fon …

The story itself matters little, especially as nothing proves it is true. However, from a technical point of view, the situation is entirely plausible. After digging into FON with the little information available about its security, here is what can be said.

FON is a worldwide hotspot system that lets you open your network to others. I will not detail the functional side, but technically the system (an access point) manages two WiFi networks: the first one is private, protected by WPA, where you put your own machines; the second one is public, with no security, where passing strangers connect. The Internet is then accessible as on any hotspot: until you are authenticated you can only reach a login page, and afterwards you get classic HTTP/HTTPS access.
The important point is that once authenticated, hotspot users access the Internet exactly like a computer on the private network (except that many ports are filtered). In other words, the IP address of the hotspot users is the same IP address as the line owner's. From the point of view of the traces left on the Internet, the person providing the access point ends up responsible for the actions of strangers.
The system seems to log connection and disconnection events, but not the details of what is done (after login, the exchanges are no longer centralized, the boxes cannot store a large history, and from the provider's point of view the flows cannot be separated). So while it may be possible to prove that another person was using the Internet at the time of the facts, it is not possible to prove who was the author of those facts. As a consequence, the system is dangerous for the person operating the service. Consider an example:
Someone connects to pedophile or terrorist websites using this access (such a person would have every interest in using this kind of access, since it quite easily provides anonymity (a point to verify)), leaving traces on the servers accessed. In a police investigation these traces are collected, and one fine morning at 6:00 you may see a few more people than expected show up at your door. Even when innocent, I imagine this situation is far from pleasant.
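The shared-address problem can be illustrated with a toy NAT model. The addresses and names are made up (the public IP uses a documentation range), and this is only a sketch of the idea, not FON's actual implementation:

```python
# Toy model of why server-side traces point at the line owner: NAT rewrites
# every outgoing flow to the single public IP of the access point.

PUBLIC_IP = "203.0.113.7"   # the line owner's public address (example range)

def nat(flow):
    """What a remote web server records for any flow leaving the AP."""
    return {"src_ip": PUBLIC_IP, "dst": flow["dst"]}

owner_flow = {"src_ip": "192.168.1.10", "dst": "example.org"}  # private SSID
guest_flow = {"src_ip": "192.168.2.23", "dst": "example.org"}  # public SSID

# From the server's point of view, the two flows are indistinguishable.
print(nat(owner_flow) == nat(guest_flow))
```

Only logs kept on the FON side (or a proxy, as suggested below) could tie a given flow back to the private or the public SSID; the remote server sees a single source address either way.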

I love the open principle of FON, but from a technical point of view I find the logging capabilities a bit light. It is admittedly difficult to fully protect the person sharing their IP address, but I think the level of logging built into the system should be very detailed, with a retention of several years. Better still, routing through a FON proxy would be a much safer solution for the person sharing: it would be the proxy's IP, not theirs, that is visible to the police, who would then turn to FON for information. FON would thus be alerted from the very start of the procedure to the strong presumed innocence of the person who lent their access.