Sciences et technologies

#Licensed A GitHub tool to counter open source software license violations - Le Monde Informatique


Development and Testing: With Licensed, GitHub ships a tool to prevent license violations in open source software

GitHub has open-sourced its Licensed tool, a Ruby gem that caches and verifies the license status of dependencies in Git repositories. Licensed has allowed GitHub engineers who use open source software to identify potential problems with dependency licenses early in the development cycle. The tool flags any dependency that needs review. As GitHub defines it, a dependency is an external software package used in an application, and a dependency source is a class that can enumerate an application's dependencies.

Here is how GitHub's Licensed tool works and what it does (a conceptual sketch follows the list):

- Licensed caches and verifies license metadata for dependencies. These dependencies are detected for various languages and package managers across the projects in a repository.

- A configuration file determines where and how dependencies are enumerated. Dependencies are enumerated for each source path in the configuration.

- When it detects a dependency, the tool finds the source location in a local environment and extracts the relevant metadata.

- It uses the Licensee Ruby gem (https://github.com/benbalter/licensee) to determine each dependency's license and locate the license text.
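As a rough illustration of this cache-and-verify workflow (a Python sketch of the general idea, not GitHub's actual Ruby implementation; the cache layout, file name and allowed-license list are invented for the example):

import json

# Hypothetical cached license metadata, one record per dependency,
# mirroring what a tool like Licensed stores in the repository.
CACHE_FILE = "license_cache.json"
ALLOWED_LICENSES = {"mit", "apache-2.0", "bsd-3-clause"}

def cache_dependency(name, version, license_id, cache):
    """Record (or refresh) the license metadata for one dependency."""
    cache[name] = {"version": version, "license": license_id}

def check_cache(cache):
    """Flag every dependency whose license needs human review."""
    return [
        name for name, meta in cache.items()
        if meta["license"] not in ALLOWED_LICENSES
    ]

if __name__ == "__main__":
    cache = {}
    cache_dependency("examplelib", "1.2.0", "mit", cache)
    cache_dependency("otherlib", "0.9.1", "unknown", cache)
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f, indent=2)  # committed alongside the code
    for name in check_cache(cache):
        print(f"dependency '{name}' needs license review")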

Continuous verification

By storing the dependency data in a source control repository, the data can be verified during the development workflow. Whenever dependencies change, the licenses may need updating, so that the license data stays current. The source control repository also provides a history of dependency changes. GitHub plans to improve Licensed, in particular by smoothing its interaction with developer workflows and with the process of adding new dependency sources. GitHub also plans to add new dependency sources. The repository notes that Licensed can discover and document obvious licensing issues early on, but that it does not replace dependency review by a human being, nor can it be considered a complete open source license compliance solution. Licensed can be downloaded from its GitHub repository (https://github.com/github/licensed), which includes installation instructions.
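In a continuous-integration step, such a freshness check could look like the following sketch (the file name and failure convention are assumptions, not part of Licensed; pip stands in for whatever package manager the project uses):

import json
import subprocess
import sys

def current_dependencies():
    """List installed (name, version) pairs by asking the package manager."""
    out = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return {pkg["name"]: pkg["version"] for pkg in json.loads(out.stdout)}

def main():
    with open("license_cache.json") as f:
        cache = json.load(f)
    stale = [
        name for name, version in current_dependencies().items()
        if name in cache and cache[name]["version"] != version
    ]
    if stale:
        print("license cache out of date for:", ", ".join(stale))
        sys.exit(1)  # fail the build until the cache is refreshed

if __name__ == "__main__":
    main()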


Anarchy, cypherpunks and freedom: the philosophical roots of bitcoin - Excerpt from the report Bitcoin, Totem et tabou


Bitcoin is not merely a reaction to the 2008 crisis. It is a response to a philosophical and political inquiry begun as early as the 1990s.

By Yorick de Mombynes.

Excerpt from the report Bitcoin, Totem et tabou (https://www.institutsapiens.fr/wp-content/uploads/2018/02/Note-Bitcoin-07-f%C3%A9vrier.pdf), written with Gonzague Grandval and published in February 2018 by the Institut Sapiens (https://www.institutsapiens.fr/).

The fact that Bitcoin was designed in 2008 led people to think it was a reaction to the great global financial crisis of that same year. It is true that Satoshi Nakamoto inserted into the first block of the Bitcoin blockchain a message referring to yet another bailout of the banks by governments: the headline of a Times article of January 3, 2009, "Chancellor on brink of second bailout for banks". The purpose of that message was above all to prove that the Bitcoin blockchain had genuinely started on January 3, 2009, but the coincidence of the technology's birth with the financial crisis has often served as its origin story.

Yet one must go back several decades to understand the true origins of this technology and its eponymous currency. In the 1990s, as the internet was truly emerging for the general public, a group of mathematicians, cryptographers, computer scientists and hackers formed to campaign for the protection of privacy, in particular through the use of cryptography. The "cypherpunks", who counted the founders of Wikileaks among their ranks, advocated the use of encryption tools to ward off the growing risk of intrusion by governments or private companies into individuals' private lives.

The cypherpunks

Timothy May was one of the major contributors to the Cypherpunk mailing list, on which he circulated in 1992 The Crypto Anarchist Manifesto (http://nakamotoinstitute.org/crypto-anarchist-manifesto/), written in 1988, a founding and visionary text with libertarian leanings that brilliantly describes the digital revolution we are now living through. "Cypherpunks believe that privacy is a good thing," wrote Tim May, "and wish there were more of it. They recognize that those who want privacy must create it for themselves and not simply expect governments, corporations, or other large, faceless organizations to grant them privacy out of benevolence. Cypherpunks know that people have had to create their own privacy for centuries, with whispers, envelopes, closed doors, and secret couriers." As Philippe Rodriguez explains in La Révolution blockchain (Dunod, 2017), a year later Eric Hughes, one of the members of the small group now known as the cypherpunks, published A Cypherpunk's Manifesto (http://nakamotoinstitute.org/crypto-anarchist-manifesto/). In it he took up, in turn, the idea that privacy must be preserved from the possible abuses of the Net and that anonymous exchange systems should be generalized.

In it he called on all cypherpunks to write encryption programs to guard against eavesdropping conducted illegally by governments or companies. "Privacy is necessary for an open society in the electronic age," he wrote prophetically. "Privacy is not secrecy, however. A private matter is something one doesn't want the whole world to know, but a secret matter is something one doesn't want anybody to know. Privacy is therefore the power to selectively reveal oneself to the world." The major texts of this movement are all available on the Nakamoto Institute website.

Built on public algorithms, the best-known encryption tools were born in the 1990s and have kept developing and gaining legitimacy ever since. Now mature, though still insufficiently adopted by the general public, these tools allow anyone to protect their correspondence and to electronically sign their exchanges, data or documents. One of their fundamental strengths lies in their independence from any central entity. The trust established between two people rests on nothing but mathematics, which makes it possible to dispense with any trusted third party, whether state or private.

In parallel, the Internet saw many innovations emerge on its network, such as the web, email and voice over IP. Made available to everyone, these technologies underpin much of our digital life and are all based on free protocols. But while electronic payment services appeared in the 1980s with magnetic-stripe and then chip bank cards, no free technology emerged to specifically offer an alternative to these tools on the Internet. The challenge, moreover, was less to create a free payment system than to design a genuine monetary system running on the Internet. A payment system cannot be truly free as long as the currency circulating in it is controlled by states and banks. Services such as PayPal, even if they have tried to project an image as an alternative to the banking model, sometimes interfere in the delivery of payments. Thus in 2010, Wikileaks saw its donation campaign abruptly halted by PayPal, then by Visa and Mastercard, as well as by the banks. There has been no shortage of attempts at autonomous monetary and payment systems since the 2000s, with nearly successful proposals such as B-money or Bit Gold. Designed by Nick Szabo in 1998, Bit Gold was a decentralized digital currency initiative whose operation was extremely close to Bitcoin's, but which failed to fully solve the classic double-spending problem (Nick Szabo remains one of the most influential figures in the Bitcoin community and is even often presented as the real Nakamoto).
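To make the "trust resting on nothing but mathematics" point concrete, here is a minimal signing-and-verification sketch using the Python cryptography package (an illustrative choice of library, not one named in the article):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Alice signs a message with her private key; anyone holding her
# public key can verify it without trusting any intermediary.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I agree to the terms of our exchange."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # raises if invalid
    print("signature valid: the message is authentic")
except InvalidSignature:
    print("signature invalid: the message was altered")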

Satoshi Nakamoto

It took Satoshi Nakamoto's technical proposal of 2008 for the first free and autonomous monetary system to finally appear. Benefiting from the context of the 2008 financial crisis, which heightened distrust of the banking system and of the role states played in runaway money and credit creation, Bitcoin established itself as an inventive and thoroughly revolutionary alternative to conventional monetary and financial environments. Nakamoto's 2008 proposal, Bitcoin: A Peer-to-Peer Electronic Cash System (https://bitcoin.fr/bitcoin-explique-par-son-inventeur/), describes a technical protocol for creating a currency with radically new characteristics, together with the exchange network enabling its peer-to-peer transfer without the intervention of a trusted third party.

Nakamoto (whether the pseudonym represents one person or several) decided to remain anonymous. After designing the protocol and helping launch it, he withdrew from public view in 2011, shortly after a meeting between Gavin Andresen (his heir apparent at the time) and the CIA. Most authors of cryptographic tools have had run-ins with the intelligence agencies of major states; that fact alone explains the choice. Bitcoin's universal character, the size of its community and the absence of any central or moral authority make this technology and network far freer and more independent than those led by figures who often take on guru status. Such identified creators represent a point of vulnerability. Bitcoin is governed only by mathematical consensus, without dogma, even if technical changes require large-scale political maneuvering, and even if a few personalities, such as the cryptographers and developers Gregory Maxwell, Nick Szabo and Adam Back, serve as references for much of the Bitcoin community. This invention was born as free software, runs on a free network and operates under transparent rules. It marks the emergence of the first free, autonomous, censorship-resistant digital currency.


#Blockcert MIT uses the blockchain to certify the diplomas earned by its students


https://credentials.mit.edu/ https://www.blockcerts.org/

Candidates can be very deceiving: according to a study by the Florian Mantione Institut, 75% of them lie to employers on their CV. And according to Robert Half, 47% of executives have already ruled a candidate out of a position after discovering "false or exaggerated information in their application". To spare recruiters this lengthy fact-checking work on every CV received, universities are experimenting with new techniques for certifying diplomas. That is notably the case at MIT, which has just recorded on the blockchain all the diplomas earned by its February graduates.


Diplomas certified thanks to the blockchain

Bitcoin is only the visible tip of the blockchain iceberg. New applications of the technology appear every day. The one devised by MIT answers a strong demand from recruiters and from honest candidates. How it works is fairly simple. MIT associates a diploma (containing images, text and a signature) with the unique identifier of each graduating student. These data are signed with an MIT private key and the record is anchored in the blockchain. Graduates can then send their diploma to recruiters, who can verify the authenticity of the information at credentials.mit.edu. The first 100 certificates were issued in October 2017 as part of a pilot program. Blockchain certification is now generalized to all graduates of the February class.
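A minimal sketch of this sign-and-anchor pattern (illustrative only: the real Blockcerts format is a structured JSON credential anchored via a blockchain transaction, and the key type and field names here are invented):

import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical issuer key; MIT's real keys and credential schema differ.
issuer_key = Ed25519PrivateKey.generate()

credential = {
    "recipient_id": "student-4711",          # unique student identifier
    "degree": "MSc in Example Studies",
    "issued": "2018-02-20",
}
payload = json.dumps(credential, sort_keys=True).encode()

signature = issuer_key.sign(payload)          # proves the issuer signed it
digest = hashlib.sha256(payload).hexdigest()  # what gets anchored on-chain

print("anchor this hash on the blockchain:", digest)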


A fast, secure process for recruiters

The entire process relies on the open source Blockcerts technology and its companion app. The app generates the keys needed to certify that a given individual holds the diploma. Graduates connect through the app to MIT's servers to authenticate themselves and let the institute assign them the diplomas they actually earned. Having diplomas on the blockchain guarantees their authenticity far better than paper or "plain" digital formats, which are relatively easy to falsify with off-the-shelf software. Once MIT adds a student's diploma to the blockchain, that record can no longer be modified. Another advantage is speed: recruiters do not have to compare an authentic diploma against the one submitted by a candidate. The verification service automatically certifies the association between a person and a diploma from a simple link (or file) sent by the candidate. The tool instantly checks that the information supplied matches what is stored on the blockchain. Source: Learning Machine
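Verification is then just recomputing the hash and checking the signature, as in this continuation of the sketch above (same hypothetical names):

import hashlib
from cryptography.exceptions import InvalidSignature

def verify_credential(payload, signature, issuer_public_key, anchored_digest):
    """True iff the credential matches the on-chain anchor and the signature."""
    if hashlib.sha256(payload).hexdigest() != anchored_digest:
        return False  # tampered: content no longer matches what was anchored
    try:
        issuer_public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Reusing payload, signature, issuer_key and digest from the sketch above:
print(verify_credential(payload, signature, issuer_key.public_key(), digest))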


What impact on industry professionals?

MIT's blockchain-based diploma certification is interesting and is presented in detail on the website of Learning Machine, the company that designed the process. Learning Machine works on various blockchain projects related to the decentralized certification of information. The possibilities offered by this technology are numerous and will potentially affect many companies and professionals (in this specific case, notaries come to mind). As Henry Williams points out in the Wall Street Journal, the adoption of blockchain in many markets may be slowed by certain private actors and by public authorities. Everything will depend on how quickly legislators react and on the influence of the lobbies concerned.


BLOCKCHAIN TRACEABILITY for food products


Blockchain traceability is a traceability service built on blockchain technology that makes it possible to verify a product's authenticity and origin.


Advantages

Traceability: the identification of a supply chain's stakeholders, recorded at every step of the process, from manufacturing to the point of sale.

Identification: production data, components and storage locations, production date, and data on the inputs or products used.

Food safety: faster diagnosis of contamination sources, and immediate, exhaustive product recalls across all distribution channels.

Fraud prevention: the blockchain provides timestamping and a transparent audit trail, with no single entity able to unilaterally modify or delete information.

Traceprod, a blockchain product traceability solution: traceability sufficient to know a product's composition throughout its production, processing and distribution chain.

Applying the principles of traceability to the production chain makes it possible to meet safety objectives. Blockchain technology has a profound impact on the economy, generating significant gains in productivity, security and efficiency.

A step up in trust: the blockchain adds an extra layer of security for the industry.

Bringing transparency: the blockchain improves supply-chain transparency for all types of products. For consumers, this means being able to scan a code on a product and know exactly where it was produced, stored and sold.

Bringing safety: the technology reinforces transparency and enables complete traceability. For retailers, if a dangerous product somehow reaches the point of sale, it can be identified and withdrawn quickly.

Enabling identification: the blockchain is a decentralized, immutable and verifiable digital ledger, visible to all. Every transaction is recorded in this ledger.

Preventing fraud: entries cannot be modified without the knowledge and approval of all those who supplied the data. The blockchain consists of timestamped digital records that can be verified by others in the chain.
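As a toy illustration of why such timestamped records are tamper-evident (a bare hash chain in Python, not the actual Traceprod system; the event names are invented):

import hashlib
import json
import time

def add_event(chain, event):
    """Append a timestamped event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every link; any edited record breaks the chain."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_event(chain, "harvested at farm A")
add_event(chain, "processed at plant B")
add_event(chain, "delivered to store C")
print(verify(chain))              # True
chain[1]["event"] = "tampered"
print(verify(chain))              # False: the edit is detected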


2018 Hyperledger Global Forum Announced: December 12-15, 2018 Basel Congress Center Basel, Switzerland #HyperledgerBlockchainTechnologies

Developers, vendors, enterprise end-users and enthusiasts of Hyperledger blockchain technologies to converge in Basel, Switzerland

SAN FRANCISCO, January 23, 2018 – Hyperledger, an open source collaborative effort created to advance cross-industry blockchain technologies, announced today the inaugural 2018 Hyperledger Global Forum, which will take place December 12-15 in Basel, Switzerland at the Congress Center Basel. The 2018 Hyperledger Global Forum will convene the global enterprise blockchain community to advance these critical technologies. The agenda will comprise both enterprise and technical tracks covering a mix of topics, including blockchain in the enterprise, distributed ledger and smart contracts 101, roadmaps for Hyperledger projects, industry keynotes and use cases in development. There will also be social networking for the community to bond, and hacking activities with mentors to help facilitate software development collaboration and knowledge sharing, bringing developers up the learning curve.

"This year's Global Forum will be the premier event to collaborate and better understand Hyperledger blockchain technologies, real use cases and production deployment challenges facing enterprises today," said Brian Behlendorf, Executive Director, Hyperledger. "For anyone still wrestling with how blockchain will transform business processes, and where their industry fits in, this is the perfect opportunity to learn more."

Open to members and non-members alike, attendees will have the chance to talk directly with Hyperledger project maintainers and the Technical Steering Committee, collaborate with other organizations on ideas that will directly impact the future of Hyperledger, and promote their work among the communities. A call for papers, keynote speakers and the conference schedule will be announced this summer. For more information about the 2018 Hyperledger Global Forum, please visit: https://events.linuxfoundation.org/events/hyperledger-global-forum-2018/

Help us spread the word! Click the following links to Tweet and share on your social networks using #HyperledgerGlobalForum.

- Inaugural 2018 #HyperledgerGlobalForum announced! Learn more about the event here: http://bit.ly/2Dkpwby Click to tweet: https://ctt.ec/LamTf
- Developers, enterprise end-users & enthusiasts of Hyperledger blockchain technologies are headed to Europe in December for the inaugural #HyperledgerGlobalForum. Learn more: http://bit.ly/2Dkpwby Click to tweet: https://ctt.ec/cZl68
- Don't miss the inaugural 2018 #HyperledgerGlobalForum happening Dec 12-15 in Basel, Switzerland. More details: http://bit.ly/2Dkpwby Click to tweet: https://ctt.ec/HKkRc

About Hyperledger

Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration including leaders in finance, banking, Internet of Things, supply chains, manufacturing and technology. Hyperledger is hosted by The Linux Foundation. To learn more, visit: https://www.hyperledger.org/.

Enigma: Decentralized Computation Platform with Guaranteed Privacy By Guy Zyskind, Oz Nathan & Alex ’Sandy’ Pentland #Blockchain #EthicsByDesign

https://www.enigma.co/
A peer-to-peer network, enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma's computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hashtable for holding secret-shared data. An external blockchain is utilized as the controller of the network; it manages access control and identities, and serves as a tamper-proof log of events. Security deposits and fees incentivize operation, correctness and fairness of the system. Similar to Bitcoin, Enigma removes the need for a trusted third party, enabling autonomous control of personal data. For the first time, users are able to share their data with cryptographic guarantees regarding their privacy.

1 Motivation

Since early human history, centralization has been a major competitive advantage. Societies with centralized governance were able to develop more advanced technology, accumulate more resources and increase their population faster [1]. As societies evolved, the negative effects of centralization of power were revealed: corruption, inequality, preservation of the status quo and abuse of power. As it turns out, some separation of powers [2] is necessary. In modern times, we strive to find a balance between the models, maximizing output and efficiency with centralized control, guarded by checks and balances of decentralized governance.

The original narrative of the web is one of radical decentralization and freedom [3]. During the last decade, the web's incredible growth was coupled with increased centralization. A few large companies now own important junctures of the web, and consequently a lot of the data created on the web. The lack of transparency and control over these organizations reveals the negative aspects of centralization once again: manipulation [4], surveillance [5], and frequent data breaches [6].

Bitcoin [9] and other blockchains [10] (e.g., Ethereum) promise a new future. Internet applications can now be built with a decentralized architecture, where no single party has absolute power and control. The public nature of the blockchain guarantees transparency over how applications work and leaves an irrefutable record of activities, providing strong incentives for honest behavior. Bitcoin the currency was the first such application, initiating a new paradigm for the web.

The intense verification and public nature of the blockchain limits potential use cases, however. Modern applications use huge amounts of data, and run extensive analysis on that data. This restriction means that only fiduciary code can run on the blockchain [7]. The problem is, many of the most sensitive parts of modern applications require heavy processing on private data. In their current design, blockchains cannot handle privacy at all. Furthermore, they are not well-suited for heavy computations. Their public nature means private data would flow through every full node on the blockchain, fully exposed.

There is a strange contradiction in this setup. The most sensitive, private data can only be stored and processed in the centralized, less transparent and insecure model. We have seen this paradigm lead to catastrophic data leaks and the systematic lack of privacy we are currently forced to accept in our online lives.
2 Enigma

Enigma is a decentralized computation platform with guaranteed privacy. Our goal is to enable developers to build 'privacy by design', end-to-end decentralized applications, without a trusted third party.

Enigma is private. Using secure multi-party computation (sMPC or MPC), data queries are computed in a distributed way, without a trusted third party. Data is split between different nodes, and they compute functions together without leaking information to other nodes. Specifically, no single party ever has access to data in its entirety; instead, every party has a meaningless (i.e., seemingly random) piece of it.

Enigma is scalable. Unlike blockchains, computations and data storage are not replicated by every node in the network. Only a small subset performs each computation over different parts of the data. The decreased redundancy in storage and computations enables more demanding computations.

The key new utility Enigma brings to the table is the ability to run computations on data without having access to the raw data itself. For example, a group of people can provide access to their salary, and together compute the average wage of the group. Each participant learns their relative position in the group, but learns nothing about other members' salaries. It should be made clear that this is only a motivating example; in practice, any program can be securely evaluated while keeping the inputs secret. (A minimal sketch of this idea appears after section 4 below.)

Today, sharing data is an irreversible process; once it is sent, there is no way to take it back or limit how it is used. Allowing access to data for secure computations is reversible and controllable, since no one but the original data owner(s) ever sees the raw data. This presents a fundamental change in current approaches to data analysis.

3 Design overview

Enigma is designed to connect to an existing blockchain and off-load private and intensive computations to an off-chain network. All transactions are facilitated by the blockchain, which enforces access-control based on digital signatures and programmable permissions.

Code is executed both on the blockchain (public parts) and on Enigma (private or computationally intensive parts). Enigma's execution ensures both privacy and correctness, whereas a blockchain alone can only ensure the latter. Proofs of correct execution are stored on the blockchain and can be audited. We supply a scripting language for designing end-to-end decentralized applications using private contracts, which are a more powerful variation of smart contracts that can handle private information (i.e., their state is not strictly public). The scripting language is also turing-complete, but this is not as important as its scalability. Code execution in blockchains is decentralized but not distributed, so every node redundantly executes the same code and maintains the same public state. In Enigma, the computational work is efficiently distributed across the network. An interpreter breaks down the execution of a private contract, as illustrated in Figure 1, resulting in improved run-time while maintaining both privacy and verifiability.

The off-chain network solves the following issues that blockchain technology alone cannot handle:

1. Storage. Blockchains are not general-purpose databases. Enigma has a decentralized off-chain distributed hash-table (or DHT) that is accessible through the blockchain, which stores references to the data but not the data themselves.
Private data should be encrypted on the client-side before storage and access-control protocols are programmed into the blockchain. Enigma provides simple APIs for these tasks in the scripting language.

2. Privacy-enforcing computation. Enigma's network can execute code without leaking the raw data to any of the nodes, while ensuring correct execution. This is key in replacing current centralized solutions and trusted overlay networks that process sensitive business logic in a way that negates the benefits of a blockchain. The computational model is described in detail in section 5.

3. Heavy processing. Even when privacy is not a concern, the blockchain cannot scale to clearing many complex transactions. The same off-chain computational network is used to run heavy publicly verifiable computations that are broadcast through the blockchain.

4 Off-chain storage

Off-chain nodes construct a distributed database. Each node has a distinct view of shares and encrypted data, so that the computation process is guaranteed to be privacy-preserving and fault tolerant. It is also possible to store large public data (e.g., files) unencrypted and link them to the blockchain. Figure 2 illustrates the database view of a single node.

[Figure 2: A node's local view of the off-chain data: shares, encrypted data, public data.]

On a network level, the distributed storage is based on a modified Kademlia DHT protocol [11] with added persistence and secure point-to-point channels, simulated using a broadcast channel and public-key encryption. This protocol assists in distributing the shares in an efficient manner. When storing shares, the original Kademlia distance metric is modified to take into account the preferential probability of a node.
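To make the salary example from section 2 concrete, here is a minimal additive secret-sharing sketch in Python (an illustration of the general idea only; Enigma's actual protocol uses verifiable secret sharing and the MPC machinery of section 5):

import secrets

P = 2**127 - 1  # a public prime; all arithmetic is modulo P

def share(value, n):
    """Split value into n additive shares that sum to value mod P.
    Any n-1 shares look uniformly random and reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three participants secret-share their salaries with three nodes.
salaries = [52_000, 67_000, 81_000]
per_node = [share(s, 3) for s in salaries]

# Each node locally sums the shares it holds (one per participant)...
node_sums = [sum(col) % P for col in zip(*per_node)]

# ...and only the reconstructed total is ever revealed.
total = sum(node_sums) % P
print(total / len(salaries))  # average wage (~66666.67), no salary exposed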
 
(...) improving the amortized complexity, they are based on assumptions that restrict functionality in practice. Conversely, we describe a generic solution to this problem for any functionality in section 5.2, which makes secure MPC feasible for arbitrarily large networks.

Note that with secure addition and multiplication protocols, we can construct a circuit for any arithmetic function. For turing-completeness, we need to handle control flow as well. For conditional statements involving secret values, this means evaluating both branches, and for dynamic loops we add randomness to the execution. Our general-purpose MPC interpreter is based on these core concepts and other optimizations presented throughout the paper.

5.1.2 Correctness (malicious adversaries)

So far we have discussed the privacy property. Liveness, namely that computations will terminate and the system will make progress, is also implied given an honest majority, since that is all that is needed for reconstruction of intermediate and output values. However, in the current framework there are no guarantees about the correctness of the output; party $p_i$ could send an invalid result throughout the computation process, which may invalidate the output. While BGW [17] presented an information-theoretic solution to verifiable MPC, its practical complexity could be as bad as $O(n^8)$, given a naive implementation [?]. Therefore, our goal is to design an MPC framework that is secure against malicious adversaries but has the same complexity as the semi-honest setting ($O(n^2)$). Later, we further optimize this as well.

Very recently, Baum et al. developed a publicly auditable secure MPC system that ensures correctness, even when all computing nodes are covertly malicious, or all but a single node are actively malicious [18]. Their state-of-the-art results are based on a variation of SPDZ (pronounced speedz) [19] and depend on a public append-only bulletin board, which stores the trail of each computation. This allows any auditing party to check that the output is correct by comparing it to the public ledger's trail of proofs. Our system uses the blockchain as the bulletin board, so our overall security reduces to that of the hosting blockchain.

SPDZ. A protocol secure against malicious adversaries (with dishonest majority), providing correctness guarantees for MPC. In essence, the protocol comprises an expensive offline (pre-processing) step that uses somewhat homomorphic encryption (SHE) to generate shared randomness. Then, in the online stage, the computation is similar to the passive case and there is no expensive public-key cryptography involved. In the online stage, every share is represented by the additive share and its MAC, as follows:

$\langle s \rangle_{p_i} = ([s]_{p_i}, [\gamma(s)]_{p_i})$, s.t. $\gamma(s) = \alpha s$, (6)

where $\alpha$ is a fixed secret-shared MAC key and $\langle \cdot \rangle$ denotes the modified secret-sharing scheme, which is also additively homomorphic. $\langle \cdot \rangle$-sharing works without opening the shares of the global MAC key $\alpha$, so it can be reused. As before, multiplication is more involved. Multiplication consumes $\{\langle a \rangle, \langle b \rangle, \langle c \rangle\}$ triplets, s.t. $c = ab$, that are generated in the pre-processing step (many such triplets are generated). Then, given two secrets $s_1$ and $s_2$, shared using $\langle \cdot \rangle$-sharing, secret-sharing the product $s = s_1 s_2$ is achieved by consuming a triplet as follows:

$\langle s \rangle = \langle c \rangle + \epsilon \langle b \rangle + \delta \langle a \rangle + \epsilon \delta$, (7)

$\epsilon = \langle s_1 \rangle - \langle a \rangle, \quad \delta = \langle s_2 \rangle - \langle b \rangle$. (8)

As mentioned, generating the triplets is an expensive process based on SHE. The full protocol including security proofs is found in [18]. Verification is achieved by solving

$\gamma - \alpha s = 0$, (9)

where $s$ is the secret that, without loss of generality, can be the reconstructed result of any secure computation. Intuitively, this is just a comparison of the computation over the MAC against the computed result times the secret MAC key. The reason we are not performing an actual comparison is so that $\alpha$ remains secret and can be reused. We can now see that $\langle \cdot \rangle$-sharing has similar properties to SSS, namely it is additively homomorphic and requires a re-sharing round for multiplication ($O(n^2)$ communication complexity), but in addition it ensures correctness against up to $n-1$ active adversaries. The offline round is easily amortized over many computations and can be computed in parallel while other computations are running, so it does not significantly affect the overall efficiency.

Publicly verifiable SPDZ. In the publicly verifiable case, MACs and commitments are stored on the blockchain, making the scheme secure even if all $n$ computing parties are malicious. We follow the representation of [18], which defines $[\![ \cdot ]\!]$-sharing as

$[\![ s ]\!] = (\langle s \rangle, \langle r \rangle, \langle g^s h^r \rangle)$, (10)

where $s$ is the secret, $r$ is a random value and $c = g^s h^r$ is the Pedersen commitment, with $g, h$ serving as generators. $[\![ \cdot ]\!]$-sharing preserves additive homomorphic properties, and with a slightly modified multiplication protocol we can re-use the same idea of generating triplets ($\{[\![ a ]\!], [\![ b ]\!], [\![ c ]\!]\}$) offline. A key observation here is that the nodes only need to compute over $\langle \cdot \rangle$-shared values and not over the commitments. These are stored on the blockchain and can later be addressed by any public validator that has the output. Even if a single node has broken its commitment, it would be evident to the auditor.

5.2 Hierarchical secure MPC

Information-theoretic results show that secure MPC protocols require each computing node to interact with all other nodes ($O(n^2)$ communication complexity) and a constant number of rounds. In the case of an LSSS, this computational complexity applies to every multiplication operation, whereas addition operations can be computed in parallel, without intercommunication. As previously mentioned, secure addition and multiplication protocols are sufficient to construct a general-purpose interpreter that securely evaluates any code [17]. Cohen et al. [20] recently proposed a method of simulating an $n$-party secure protocol using a log-depth formula of constant-size MPC gates, as illustrated in Figure 3. We extend their result to LSSS and are able to reduce the communication complexity of multiplication from quadratic to linear, at the cost of increased computation complexity, which is parallelized. Figure 4 illustrates how vanilla MPC is limited by the number of parties, while our implementation scales up to arbitrarily large networks.

[Figure 3: Hierarchical formula of constant-size MPC gates. Figure 4: Simulated performance comparison (computation time in ms vs. number of parties) of the optimized secure MPC variant against classical MPC and an unsecured baseline.]
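The triplet-based multiplication of equations (7)-(8) can be made concrete with a small Python sketch (semi-honest and purely didactic: no MACs, commitments or SHE preprocessing, so this is not SPDZ itself, just the share-and-triplet arithmetic it builds on):

import secrets

P = 2**61 - 1  # public prime field modulus

def share(v, n=3):
    """Additively share v among n parties, mod P."""
    s = [secrets.randbelow(P) for _ in range(n - 1)]
    return s + [(v - sum(s)) % P]

def reveal(shares):
    return sum(shares) % P

# Offline phase: a trusted dealer stands in for the SHE-based
# preprocessing, producing one triplet c = a*b in shared form.
a, b = secrets.randbelow(P), secrets.randbelow(P)
A, B, C = share(a), share(b), share((a * b) % P)

# Online phase: multiply secrets s1, s2 held in shared form.
s1, s2 = 1234, 5678
S1, S2 = share(s1), share(s2)

# Parties open epsilon = s1 - a and delta = s2 - b; these leak
# nothing, since a and b are uniformly random masks.
eps = reveal([x - y for x, y in zip(S1, A)])
delta = reveal([x - y for x, y in zip(S2, B)])

# Each party computes its share of s = c + eps*b + delta*a + eps*delta;
# the public eps*delta term is added by one designated party only.
S = [(C[i] + eps * B[i] + delta * A[i]) % P for i in range(3)]
S[0] = (S[0] + eps * delta) % P

assert reveal(S) == (s1 * s2) % P
print("shared product reconstructs correctly")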
5.3 Network reduction

To maximize the computational power of the network, we introduce a network reduction technique, where a random subset of the entire network is selected to perform a computation. The random process preferentially selects nodes based on load-balancing requirements and accumulated reputation, as measured by their publicly validated actions. This ensures that the network is fully utilized at any given point.

5.4 Adaptable circuits

Code evaluated in our system is guaranteed not to leak any information unless a dishonest majority colludes ($t \geq \frac{n}{2}$). This is true for the inputs, as well as for any interim variables computed while the code is evaluated. An observant reader would notice that as a function is evaluated from inputs to outputs, the interim results generally become less descriptive and more aggregative. For simple functions or functions involving very few inputs, this may not hold true, but since these functions are fast to compute, no additional steps are needed. However, for computationally expensive functions involving many lines of code and a large number of inputs, we can dynamically reduce the number of computing nodes as we progress, instead of having a fixed $n$ for the entire function evaluation process. Specifically, we design a feed-forward network (Figure 5) that propagates results from inputs to outputs. The original code is reorganized so that we process addition gates on the inputs first, followed by processing multiplication gates. The interim results are then secret-shared with $N_c$ nodes, and the process is repeated recursively.

[Figure 5: Feed-forward flow of the secure code evaluation: inputs pass through alternating addition and multiplication layers to an output layer.]

5.5 Scripting

As previously mentioned, end-to-end decentralized apps are developed using private contracts, which are further partitioned into on-chain and off-chain execution. Off-chain code returns results privately, while sending correctness proofs to the blockchain. For simplicity, the scripting language is similar in syntax to well-known programming languages. Two major additions to the scripting language require more detail.

5.6 Private data types

Developers should use the private keyword to specify private objects. This automatically ensures that any computation involving those objects remains secure and private. When working with private objects, the data themselves are not locally available; only a reference to them is.

5.7 Data access

There are three distinct decentralized databases living in the system, each accessible through a global singleton dictionary. Specifically:

1. Public ledger. The blockchain's public ledger can be accessed and manipulated using L. For example, L[k] ← 1 would update key k for all nodes. Since the ledger is completely public and append-only, the entire history is stored as well and is (read-only) accessible using L.get(k, t).

2. DHT. Off-chain data are stored on the DHT and are accessible in the same way the public ledger is. By default, data are encrypted locally before transmission and only the signing entity can request the data back. Otherwise, DHT.set(k, v, p), where k is the key, v is the value and p is a predicate p : X → {0, 1}, sets v to be accessible through k if and only if p is satisfied. We supply several built-in predicates in the language, such as limiting access to a list of public keys. If encryption is turned off, the default predicate is ∀x p(x) = 1, so the data are public but distributed off-chain.

3. MPC. Syntactically, using MPC is equivalent to using DHT, but the underlying process differs. In particular, executing MPC.set(k, v, p) secret-shares v. The shares are distributed to potential computing parties that store them in their local views. Now p can be used to specify who can reference the data for computation using v_ref ← MPC[k], without revealing v. By default, only the original dealer can ask for the raw data back by running v ← MPC.declassify(k), which, similarly to the sharing process, collects shares from the various parties and reconstructs the secret value locally. In addition, any other entities belonging to the same shared identity can reference the data for computation. For details about shared identities, see section 6.1.

Note that for simplicity we addressed all keys in the L, DHT and MPC dictionaries as using a single namespace, whereas in practice finer granularity is available, so that they can be segmented into databases, tables, and finer hierarchies.
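As a toy illustration of the MPC singleton's semantics (a self-contained Python stand-in, not Enigma's implementation; the class name and simplifications are mine):

import secrets

P = 2**61 - 1

def _share(v, n=3):
    s = [secrets.randbelow(P) for _ in range(n - 1)]
    return s + [(v - sum(s)) % P]

class ToyMPC:
    """Toy stand-in for the MPC singleton: set(k, v, p) secret-shares v
    across nodes, MPC[k] returns an opaque reference, and declassify(k)
    reconstructs the value if the stored predicate allows it."""
    def __init__(self, n=3):
        self.nodes = [dict() for _ in range(n)]   # each node's local view
        self.predicates = {}
    def set(self, k, v, p):
        for node, s in zip(self.nodes, _share(v, len(self.nodes))):
            node[k] = s                            # one share per node
        self.predicates[k] = p
    def __getitem__(self, k):
        return ("ref", k)                          # reference, not the value
    def declassify(self, k, pk="dealer"):
        if not self.predicates[k](pk):
            return None                            # access denied
        return sum(node[k] for node in self.nodes) % P

MPC = ToyMPC()
MPC.set("alice_height", 172, lambda pk: pk == "dealer")
print(MPC["alice_height"])             # ('ref', 'alice_height'): value hidden
print(MPC.declassify("alice_height"))  # 172, reconstructed by the dealer only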
6 Blockchain interoperability

In this section we show how Enigma interoperates with a blockchain. Specifically, we detail how complex identities are formed using digital signatures, which are automatically compatible with blockchains. We then describe in detail the core protocols linking Enigma's off-chain storage and computation to a blockchain.

6.1 Identity management

A recent survey paper divided blockchain-inspired technologies into two groups: fully decentralized permission-less ledgers (e.g., Bitcoin, Ethereum) and semi-centralized permissioned ledgers (e.g., Ripple) [21]. In the paper, the author argues that there is an inherent trade-off between having a pseudo-anonymous system, where no one is trusted and all information must remain public, and having a somewhat centralized system with trusted nodes that can verify true underlying identities. With an off-chain technology linked to a blockchain, this trade-off can be avoided while the network remains fully decentralized. For this to work, we define an extended version of identities, one that captures shared identities across multiple entities and their semantic meaning. Formally, the pseudo-anonymous portion of a shared identity is a $(2n+1)$-tuple:

$\mathrm{SharedIdentity}_P = (addr_P, pk_{sig}^{(p_1)}, pk_{sig}^{(p_2)}, \cdots, pk_{sig}^{(p_n)})$, (11)

where $n$ denotes the number of parties. It should be clear that for $n = 1$ we revert to the special pseudo-identity case.

To complete our definition of shared identities, we incorporate the idea of meta-data. Meta-data encapsulates the underlying semantic meaning of an identity. Primarily, these include public access-control rules defined by the same predicates mentioned earlier, which the network uses to moderate access-control, along with any other public or private data that is relevant. For example, Alice may want to share her height with Bob, but not her weight. Alternatively, she may not even want to tell Bob her exact height, but will allow him to use her height in aggregate computations. In this case, Alice and Bob can establish a shared identity for this purpose. Alice invokes a private contract that shares her height using MPC['alice_height'] = alice_height, which Bob can reference for computations, without accessing Alice's height value directly.

The default MPC predicate establishes that Alice's pseudonym is the owner of the shared information and that Bob has restricted access to it. The predicate, the shared identity's list of addresses and a reference to the data are stored on the blockchain and collectively define the public meta-data, in other words, information related to the identity that is not sensitive but should be used to publicly verify access rights. Any additional meta-data that is private, that is, that only Alice, Bob and perhaps several others should have access to, could be securely stored off-chain using the DHT.

It should now be clear how our system solves the need for trusted nodes. As always, public transactions are validated through the blockchain. With shared identities and predicates governing access-control stored on the ledger, the blockchain can moderate access to any off-chain resources. For anything else involving private meta-data, the off-chain network can act as a trustless privacy-preserving verifier.

6.2 Link protocols

We now discuss the core protocols linking the blockchain to off-chain resources. Specifically, we elaborate on how identities are formed and stored on the ledger, and how off-chain storage (DHT) and computation (MPC) requests are routed through the blockchain, conditional on satisfying predicates.

6.2.1 Access control

Protocol 1 describes the process of creating a shared identity and Protocol 2 implements the publicly-verifiable contract for satisfying predicates.

Algorithm 1: Generating a shared identity
Input: P = {p_i} (i = 1..N) parties, A = {POLICY_{p_i}} (i = 1..N)
Output: ledger L stores a reference to the shared identity.
  addr_P ← 0; ACL ← ∅
  for each p_i ∈ P:
    (pk_sig^(p_i), sk_sig^(p_i)) ← G_sig()
    addr_P ← addr_P ⊕ pk_sig^(p_i)
    ACL[pk_sig^(p_i)] ← A[p_i]
  m ← (addr_P, ACL)
  send signed tx(m) to the network
  procedure StoreIdentity(addr_P, ACL):
    L[addr_P] ← ACL

Algorithm 2: Permissions check against the blockchain
Input: pk_sig^(p_i), the requesting party's signature key; addr_P, the shared identity's address; q, a predicate verifying whether p_i has sufficient access rights.
Output: s ∈ {0, 1}.
  procedure CheckPermission(pk_sig^(p_i), addr_P, q):
    s ← 0
    if L[addr_P] ≠ ∅:
      ACL ← L[addr_P]
      if q(ACL, pk_sig^(p_i)): s ← 1
    return s

6.2.2 Store and Load

Storing and loading data for direct access via the DHT are shown in Protocol 3. For storing data, write permissions are examined with the given q_store predicate. The storing party can provide a custom predicate for verifying who can read the data. This is the underlying process that is abstracted away by the DHT singleton object in the scripting language.

Algorithm 3: Storing or loading data
Input (store): pk_sig^(p_i), addr_P, x (data), q(x)_read, a predicate verifying future read access.
Output (store): if successful, returns a_x, the pointer to the data (predicate); ∅ otherwise.
  procedure Store(pk_sig^(p_i), addr_P, x, q(x)_read):
    if CheckPermission(pk_sig^(p_i), addr_P, q_store) = True:
      a_x ← H(addr_P ‖ x)
      L[a_x] ← q(x)_read
      DHT[a_x] ← x
      return a_x
    return ∅
Input (load): pk_sig^(p_i), addr_P, a_x, the address of the data (predicate).
Output (load): if successful, returns the data x; ∅ otherwise.
  procedure Load(pk_sig^(p_i), addr_P, a_x):
    q(x)_read ← L[a_x]
    if CheckPermission(pk_sig^(p_i), addr_P, q(x)_read) = True:
      return DHT[a_x]
    return ∅

6.2.3 Share and Compute

Share and compute, illustrated in Protocol 4, are the MPC equivalents of the store and load protocols, since they enable processing. Internally, they store and load shares from the DHT and allow working with references to the data while keeping the data secure.

Algorithm 4: Secure computation and secret-sharing protocols
Input (share): pk_sig^(p_i), addr_P, x (data), x_ref, a reference for computation; q(x)_compute, a predicate verifying computation rights; n, t.
Output (share): if successful, returns a pointer to x_ref for future computation; ∅ otherwise.
  procedure Share(pk_sig^(p_i), addr_P, x, x_ref, q(x)_compute, n, t):
    [x]_p ← VSS(n, t)
    peers ← sample n peers
    for each peer ∈ peers:
      send [x]_p^(peer) to peer over a secure channel
    return Store(pk_sig^(p_i), addr_P, x_ref, q(x)_compute)
Input (compute): pk_sig^(p_i), addr_P, a_x_ref, the reference data address; f, unsecure code to be rewritten as a secure protocol.
Output (compute): if successful, returns f(x) without revealing x; ∅ otherwise.
  procedure Compute(pk_sig^(p_i), addr_P, a_x_ref, f):
    x_ref ← Load(pk_sig^(p_i), addr_P, a_x_ref)
    if x_ref ≠ ∅:
      f_s ← generate secure computation protocol from f
      return f_s(x_ref)
    return ∅

7 Incentives

Since Enigma is not a cryptocurrency or a blockchain, the incentive scheme is based on fees rather than mining rewards; nodes are compensated for providing computational resources. Full nodes are required to provide a security deposit, making malicious behaviour punishable.

7.1 Security Deposits

A possible attack on MPC protocols takes advantage of the lack of guaranteed fairness in the protocol. Under certain conditions, a malicious party can learn the output and abort the protocol before other parties learn the output as well. While this attack, when carried out by a majority, cannot be prevented, it can be penalized. Using Bitcoin security deposits for punishing malicious nodes in MPC has been investigated by several scholars recently [22, 23]. We use a similar model, and extend it to penalize other malicious behaviors such as breaking correctness, which is validated by the SPDZ protocol (see section 5.1.2). To participate in the network, store data, perform computations and receive fees, every full node must first submit a security deposit to a private contract. After each computation is completed, a private contract verifies that correctness and fairness were maintained. If a node is found to have lied about its outcome or to have aborted the computation prematurely, it loses the deposit, which is split between the other honest nodes. The computation is continued without the malicious node (e.g., by setting its share of the data to 0).

7.2 Computation Fees

Every request in the network for storage, data retrieval, or computation has a fixed price, similar to the concept of Gas in Ethereum. Unlike Ethereum, where every computation is run by every node, in Enigma different nodes execute different parts of each computation and need to be compensated according to their contribution, which is measured in rounds. Recall that every function is reduced to a circuit of addition and multiplication gates, each of which takes one or more rounds. A node participating in a computation is paid the weighted sum of the number of rounds it contributed to and the operations it performed (addition, multiplication). Since the platform is turing-complete, the exact cost of a request cannot always be pre-calculated. Therefore, once the computation is finalized, the cost of each request is deducted from an account balance each node maintains. A request will not go through unless the account balance is over a minimum threshold.
7.3 Storage Fees

Fees for data storage are market-based and time-limited. The hosting contract is automatically renewed using the owner's account balance. If the balance is too low, access to the data will be restricted, and unless additional funds are deposited, the data will be deleted within a certain amount of time.

8 Applications

8.1 Data Marketplace

A direct consumer-to-business marketplace for data. With guaranteed privacy, autonomous control and increased security, consumers will sell access to their data. For example, a pharmaceutical company looking for patients for clinical trials can scan genomic databases for candidates. The marketplace would eliminate tremendous amounts of friction, lower costs for customer acquisition and offer a new income stream for consumers.

8.2 Secure Backend

Many companies today store large amounts of customer data. They use the data to provide personalized services, match individual preferences, target ads and offers, etc. With Enigma, companies (...)

Full paper: https://www.enigma.co/enigma_full.pdf

Everipedia, the blockchain-based online encyclopedia that wants to dethrone Wikipedia - Blog du Modérateur


https://everipedia.org/

For a few days now, we have been offering a tour of the blockchain's possible applications. From agriculture to publishing, by way of cat-based games and the music industry, they are very numerous. The news is here to remind us, with a story that has people talking: Wikipedia co-founder Larry Sanger has just joined a competitor, Everipedia. And in a sign of the times, the latter is built on the blockchain.

Larry Sanger had been critical of Wikipedia for several years, arguing that the project had been diverted from its original purpose by certain overzealous users. His decision to join Everipedia is therefore not surprising, and it gives us the opportunity to look into this promising project. Everipedia wants to use smart contracts and the blockchain to record article edits, thereby providing a censorship-resistant system. The goal is to decentralize information to make it trustworthy, where it is currently concentrated in the hands of a few players. The start-up plans to use both the EOS blockchain and the decentralized file system IPFS (InterPlanetary File System) to store the heaviest files.

The project also plans to pay contributors in its own cryptocurrency, aptly named IQ, to encourage them to take part in content creation. Of the 100 million IQ planned, half will be sold in an ICO, 30% will be distributed to contributors over the next 100 (!) years, and 20% correspond to a funding round already completed.

The project aims to break free of Wikipedia's few "guardians of the temple", a small group of people said to control the project's evolution too tightly and to drive away the bulk of occasional contributors. Everipedia wants to be more open and more democratic. It still has a long way to go to catch up with its prestigious rival, even though Everipedia already claims 6 million articles and 2.3 million unique visitors. There is no French version on the horizon beyond a few pages; English remains overwhelmingly dominant on the platform. And the move to the blockchain is still under development, the current site being based on centralized technology. One to watch, then; Everipedia's example is symbolic of the possibilities the blockchain offers, and projects of this kind are likely to multiply in the coming months.


Ethereum Founder Unveils Roadmap For Next-Gen Blockchain #VitalikButerin #ConferenceInTaipei


At the “Beyond Block” conference in Taipei, Ethereum’s founder, Vitalik Buterin, unveiled the plans for “Ethereum 2.0,” the next-generation version of Ethereum.

Ethereum 1.0 Problems

The Ethereum network was born as an idea for a next-generation cryptocurrency network, one that could do far more interesting things than just financial transactions. The Ethereum network has also been called the "programmable blockchain," because you can develop "distributed applications" (dapps) on top of it. However, the network's rapid growth in recent years has also revealed a few major issues. According to Buterin, there are currently four major problems that need to be solved to push the Ethereum network to the next level: privacy, consensus safety, smart contract safety, and perhaps the biggest of them all, scalability.

Privacy

When Bitcoin first came out, everyone started calling its transactions “anonymous,” because you didn’t have your name directly tied to a transaction like you do with a credit card, especially if you were using a PC wallet to transact the money, rather than a centralized exchange. However, Bitcoin and many other cryptocurrencies’ core technology is something called the “blockchain,” a distributed ledger in which all transactions and wallet addresses are inscribed. What that means is that every single transaction and its corresponding address is recorded. The blockchain is also public for most cryptocurrencies, including Ethereum, which means anyone can look up all the transactions done from a given wallet address. That wallet address could then be tied to a real person’s identity if that person does any transaction that may reveal it. For instance, if the person in question transfers the money from that address to a centralized exchange’s address where his or her name is used, then all the previous transactions can be traced back to them. This is somewhat similar to using Tor for anonymity but then logging in to your real Facebook account or to an email address into which you’ve logged before with your real IP address. The Ethereum developers have already taken steps to address this by implementing the same zero-knowledge proof privacy technology used by Zcash in a recent upgrade. The technology should enable distributed apps (such as voting apps, for instance) to have mathematically provable anonymity. Buterin said that the privacy issue should be 75% solved already at the network-level, with the remaining 25% to be solved by apps that work on top of Ethereum which would need to actually implement those privacy features.

Consensus Safety

Consensus is currently achieved through a "proof of work" system, in which miners have to "mine" blocks on the network by expending computational resources. The system is necessary to ensure that the network isn't taken over by an attacker who could then control how money is spent on the network. However, the big downside to this system is that it consumes ever more power. A recent report said that Bitcoin mining consumes as much power in a year as 159 countries. Buterin admitted at the recent conference in Taipei that Ethereum isn't much better. However, the plan is to eventually switch Ethereum (slowly) to a "proof of stake" system, which wouldn't require anywhere near as many computational resources.
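
To make the energy cost concrete, here is a minimal proof-of-work loop (illustrative only; real networks use different hash constructions and difficulty encodings):

    import hashlib

    def mine(block_data: bytes, difficulty: int) -> int:
        """Find a nonce such that SHA-256(block_data + nonce) starts with
        `difficulty` zero hex digits: the brute-force search that makes
        proof of work expensive."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    # Each extra zero of difficulty multiplies the expected work by 16,
    # which is why proof of work consumes so much energy at network scale.
    print("valid nonce:", mine(b"block of transactions", difficulty=4))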

Smart Contract Safety

Ethereum has gone through its own share of cryptocurrency drama over the past couple of years. One of the most appealing things about Ethereum is that it is also a smart contract platform. A smart contract is a self-executing contract in which the terms between a buyer and a seller, as well as the enforcement of the clauses, are all written into code. It turns out that smart contracts can be about as buggy as any other piece of software. The only difference is that one buggy smart contract can cost people hundreds of millions of dollars if something goes wrong, and it has. On one occasion, a hacker was able to temporarily steal $55 million from a distributed app running on top of Ethereum. The Ethereum developers were able to stop the attack by forking the Ethereum blockchain, thus creating what is now called Ethereum and the "old" Ethereum Classic. Buterin said that Ethereum will eventually introduce formal verification for smart contracts, as well as a new Python-like "Viper" smart contract programming language that is supposed to enable the development of safer Ethereum applications.
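
The attack referenced here exploited a reentrancy-style bug. As a rough illustration of that bug class, here is a Python toy model (not Solidity, and not the actual contract code): the ledger pays out before updating its state, so a malicious callback can withdraw twice.

    class TokenBank:
        """Toy contract: pays out BEFORE zeroing the balance (the bug)."""
        def __init__(self):
            self.balances = {}

        def deposit(self, who, amount):
            self.balances[who] = self.balances.get(who, 0) + amount

        def withdraw(self, who, callback):
            amount = self.balances.get(who, 0)
            if amount > 0:
                callback(amount)          # external call happens first...
                self.balances[who] = 0    # ...state update happens last: bug!

    class Attacker:
        """Re-enters withdraw() from inside the payout callback."""
        def __init__(self, bank):
            self.bank, self.stolen, self.reentered = bank, 0, False

        def receive(self, amount):
            self.stolen += amount
            if not self.reentered:
                self.reentered = True
                # Balance has not been zeroed yet, so this pays out again.
                self.bank.withdraw("attacker", self.receive)

    bank = TokenBank()
    attacker = Attacker(bank)
    bank.deposit("attacker", 100)
    bank.withdraw("attacker", attacker.receive)
    print(attacker.stolen)  # 200: paid twice from a single 100 deposit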

Scalability

The biggest problem with Ethereum, as with the majority of cryptocurrencies, is scalability. If Ethereum is to be used universally by big banks and everyone in the world, it needs to be able to do many orders of magnitude more transactions per second than it can right now. Buterin said there are multiple scalability solutions being explored by different cryptocurrencies, including Bitcoin, but these involve some compromises. For instance, most cryptocurrencies, including Ethereum, currently sacrifice scalability to get safety. To increase scalability, some cryptocurrencies plan to sacrifice some safety by off-loading some transactions to other cryptocurrency networks where the transaction fees are cheaper.

Enter “Sharding”

Buterin explained that the next generation of Ethereum will use a new architecture called "sharding," which will enable the network to process thousands of transactions per second -- all on the same chain, which means safety will not be sacrificed. Sharding will enable multiple "parallel universes," or domains, to exist on the same network, but the transactions that occur in one of those universes won't affect the speed of the network in the other universes. There will also be protocols to link the different universes, but they will be more limited. Transferring data from one universe to another could, for instance, take two weeks, according to Buterin. These universes will share consensus, so if an attacker wanted to take over one of the universes, they would have to take over all of them, that is, the entire Ethereum network. For now, this new architecture still appears to be very much in the planning stage, as not all of the details seem to have been figured out. The Ethereum team does plan to release a more limited version of this idea in the near future. Buterin also noted that sharding will create new types of addresses on the network, which will give Ethereum the opportunity to evolve by adopting new backwards-incompatible protocols without disrupting the main blockchain.
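
One common way to picture sharding, as a rough sketch (the shard count and mapping here are invented; Ethereum's actual design was still being worked out at the time), is a deterministic mapping from addresses to "universes":

    import hashlib

    NUM_SHARDS = 64  # illustrative parameter only

    def shard_of(address: str) -> int:
        """Deterministically map an address to one of the parallel 'universes'."""
        return int(hashlib.sha256(address.encode()).hexdigest(), 16) % NUM_SHARDS

    # Transactions whose sender and receiver live on the same shard can be
    # processed in parallel with unrelated shards; cross-shard transfers need
    # the slower linking protocols the article mentions.
    tx = {"from": "0xabc...", "to": "0xdef..."}
    same_shard = shard_of(tx["from"]) == shard_of(tx["to"])
    print("intra-shard" if same_shard else "cross-shard")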


7 surprising Linux distributions By Dave Taylor / IDG (adapted by M.G.) - Le Monde Informatique


Beyond the best-known Linux distributions, such as Ubuntu, Red Hat or Fedora, there are other, less famous ones that are still worth a look. A closer look at the most original, even offbeat, among them.

Some little-known Linux distributions may seem obscure at first glance. They were often created for specific purposes and can, for example, offer a roundabout way to get certain audiences interested in Linux. Above all, they demonstrate the extraordinary customizability of the open source OS, which can be adapted for many categories of users.


1 - CAINE Linux

Computer Aided INvestigative Environment is an Italian Linux distribution designed for criminal investigations and other forensic and police work. To that end it includes a series of specialized tools, including several steganography programs (for hiding data inside an image or a video), as well as TheSleuthKit (TSK), a library of programs for analyzing disk images that comes with the Autopsy Forensic Browser interface. For budding detectives, CAINE also includes UFO, Ultimate Forensic Outflow, for analyzing in detail a computer whose contents have been recovered (browser history, password recovery, malware analysis, log viewing, and so on). In short, it is the version of Linux that Hercule Poirot and Sherlock Holmes would have used had they worked in the 21st century...


2 - GNewSense

GNewSense Linux will suit open source die-hards who are exasperated by the very idea of trademarks and software copyrights. This distribution was designed to remove all proprietary add-ons. To some extent, it is another revolution in the history of Unix: BSD 4.4 was a rewrite of AT&T Unix that shed the OS's intellectual property. GNewSense seeks to do the same with Linux, with surprising success. More specifically, GNewSense has rid itself of everything that is not open source, from the OS kernel down to individual programs. As the team that maintains the system puts it: "Using free software is a political and ethical choice asserting your rights to learn and to share what you learn with others." With GNewSense, built on top of Debian 7, the user will run into no trademark or licensing problems.

3 - Red Star OS

Here is a distribution that is not for everyone. In fact, it is specially designed for those who need an OS with built-in content filtering because they live under strict government censorship. It is the official OS of the Democratic People's Republic of Korea, in other words, North Korea. It is based on Linux, but it lets the government operate a closed system with controlled network access and programs. Red Star OS is, of course, not open source. It can automatically watermark media files, destroy content deemed inappropriate, and block access to major parts of the global Internet. Anyone tempted to try it should do so very carefully. Deep within the OS there is said to be code that reportedly reports back to the DPRK's central systems on what you are doing, where you are, and so on.

4 - Damn Small Linux

Some operating systems have had an unfortunate tendency to sprawl lately. While some developers think you can simply buy a bigger disk if the OS exceeds 10 GB, others have fortunately sought to design leaner solutions. That is what inspired the creators of Damn Small Linux, alias DSL. It is the perfect OS for situations where space is tight, since it fits in 50 MB. It can thus easily be placed on flash storage or on a credit-card-sized storage device, allowing it to run on older systems. To be fair, it is not meant to replace a Red Hat installation either. But it can perfectly well serve as an SSH server or the like. Surprisingly, even with this small footprint, the development team managed to fit administration tools with a graphical interface into the OS.

5 - Yellow Dog Linux

For those who have toiled to assemble a cluster of Sony PlayStation 3 consoles to build their own grid system, Yellow Dog Linux may be the answer. Built on top of Red Hat Linux/CentOS, this distribution dates back to the days when Apple allowed third parties to build Macs on PowerPC chips. That did not last long, and PowerPC CPUs were supplanted by faster chips. But Yellow Dog has evolved and remains an interesting distribution for complex assemblies, the kind you look for in high-performance multicore systems. In addition, its latest version supports IBM's Cell SDK for multicore acceleration, Barcelona Superscalar, and the E17 (Enlightenment) desktop environment. Together they provide an interface and the power to tackle complex builds.

6 - Tails Linux

Fear of eroding privacy protections is the basis of the Tails Linux distribution. Built on top of Debian and supported both by Mozilla and by the team behind the anonymous Tor browser, it is aimed at anyone serious about protecting their privacy or anonymity. It is a distribution designed to self-boot from a flash drive or similar external medium. You can therefore launch it on any Windows PC to access the Internet, then disconnect without leaving a trace of your activity. All connections are routed through the Tor network. Encryption tools are available to encrypt your files, e-mails, and instant messages. Tails is an acronym: The Amnesic Incognito Live System.

7 - ZeroShell

Here is a Linux distribution created specifically for embedded systems such as routers, firewalls, proxy servers, net balancers, OpenVPN clients, and DNS servers. It is a perfect fit for the Raspberry Pi. ZeroShell does not include a graphical interface: to access and configure it, you have to use a web browser running on another machine. Built-in features include a RADIUS server for secure authentication, QoS management support, an HTTP proxy server, host-to-LAN and LAN-to-host VPN, multi-zone DNS, and support for Kerberos 5 authentication. It is available on a self-booting external medium (Live CD) and can run from a flash card. It also exists as a downloadable 512 MB image that boots automatically.

And also, for more offbeat uses:

8 - Ubuntu Satanic Edition

The developers of Ubuntu Satanic Edition probably had a lot of fun creating the distribution's stylized icons and wallpapers. The latest version of this devilish distribution was humorously christened 666.9. Apparently assuming that its users are also metal fans, it includes a series of heavy metal tracks released into the public domain. It must be quite effective for stressing out your colleagues... Installed on top of Ubuntu Linux, this edition, with its assortment of more or less frightening wallpapers, is obviously not meant for the CEO's laptop (unless he works for Metallica), but it is a perfectly viable version of the OS, ideal for Halloween and for goth developers who prefer to work in the basement.

9 - Ubuntu Christian Edition

If Satan is not your cup of tea, there is a Linux distribution that offers Christian-inspired wallpapers and includes a parental control system. The latest version of Ubuntu Christian Edition (CE) is based on Ubuntu 12.04, and its developers say they can adapt it to any community.

10 - Hannah Montana Linux

The theme of this Linux distribution, aimed mostly at an American audience, revolves around the old Disney TV shows and the Hannah Montana franchise. It has not been updated in a long time, but it still exists. The developer of HML is not a particular fan of Hannah Montana (who appears on the wallpaper). He simply wanted to offer an appealing look to draw young users to Linux rather than Windows or Mac OS X. HML is built on top of Kubuntu with KDE 4.2, and it is nothing more than a custom theme on top of a standard version of the OS. But the idea of introducing children to Linux is interesting.


Ring: a universal, free communication platform that respects users' freedoms and privacy #SavoirFaireLinux


Ring is free software that lets its users communicate in multiple ways.

What is it for?

A telephone: a simple tool for contacting, seeing, and talking with others;

A conference station: easily join your calls together to create conversations with several participants;

A media-sharing tool: Ring works with a multitude of cameras recognized by your system and displays an image or video file in real time. It selects the audio sources and outputs to use and relies on high-quality audio and video codecs;

A messenger: send texts during your calls or outside them (as long as your correspondent is online);

A basic building block for your IoT project: re-use Ring's universal communication technology, with its portable library, in your own system. https://ring.cx/fr/decouvrir/a-propos



Why Testing Is Important for Distributed Software By Karl Hughes - The Linux Foundation


As developers, we often hear that tests are important. Automated testing minimizes the number of bugs released to production, helps prevent regression, improves code quality, supplements documentation, and makes code reviews easier. In short, tests save businesses money by increasing system uptime and keeping developers working on new features instead of fighting fires. While software testing has been around for about as long as software has, I would argue that testing is especially important (and unfortunately more challenging) in modern distributed software systems. “Distributed software” refers to any software that is not run entirely on one computer. Almost all web applications are distributed software, as they rely on applications on other servers (e.g., remote data stores, internal REST APIs, third-party APIs, content delivery networks), and most mobile and desktop applications are as well. Distributed software presents new challenges and requires a thoughtful approach to testing. This list includes just some of the reasons that testing is crucial for distributed software:


1. Third Party APIs Can Change Unexpectedly


We would like to think that every REST API we use will adhere to some form of versioning, but this doesn’t always happen. Sometimes APIs break when a maintainer fixes a bug, sometimes breaking changes are overlooked, and sometimes the API just isn’t mature or stable yet. With more companies releasing public APIs, we’re bound to see the number of accidentally breaking releases rise, and tests are a great way to prevent those breaking changes from affecting our applications.
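
One inexpensive guard is a contract test that pins the response shape you depend on. Here is a minimal sketch with a hypothetical endpoint and field names (in practice you would run it against recorded or sandbox responses; the mock below only keeps the example self-contained):

    import unittest
    from unittest import mock

    def fetch_user(client, user_id):
        """Thin wrapper around a hypothetical third-party REST API."""
        return client.get(f"/users/{user_id}").json()

    class ThirdPartyContractTest(unittest.TestCase):
        def test_user_payload_has_expected_fields(self):
            client = mock.Mock()
            client.get.return_value.json.return_value = {"id": 1, "name": "Ada"}
            user = fetch_user(client, 1)
            # The assertion encodes our assumptions about the API's contract;
            # an upstream change should fail here, not in production.
            self.assertEqual({"id", "name"}, set(user.keys()))

    if __name__ == "__main__":
        unittest.main()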


2. Internal API Changes can Affect Your App in Unexpected Ways

Even more commonly, breaking API changes come from within our own organization. For the past few years, I've been working with startups where the business requirements change almost as fast as we can implement them, so our internal APIs are rarely stable and the documentation sometimes gets outdated. Slowing down, improving communication between team members, and writing tests for our internal APIs has helped.

3. Remotely Distributed Open Source Packages are More Popular Than Ever

78% of companies are now running on some form of open source software. This has made developing software dramatically faster and easier, but blindly relying on open source packages has bitten plenty of developers as well (see the left-pad incident of 2016). Once again, we hope that open source packages use semantic versioning, but it's impossible to guarantee this. Testing the boundaries between packages and our software is one way to help improve reliability.


4. Network Connections Aren’t Perfect

In many server-to-server cases, network connections are pretty reliable, but when you start serving up data to a browser or mobile client via an API, it gets much harder to guarantee a connection. In either case, you should have a plan for failure: Does your app break? Throw an error? Retry gracefully? Adding tests that simulate a bad network connection can be a huge help in minimizing poor user experiences or data loss.
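
As a sketch of the "retry gracefully" option (the flaky call and its parameters are simulated for illustration):

    import random
    import time

    def flaky_send(payload, failure_rate=0.5):
        """Stand-in for a network call that sometimes fails (simulation only)."""
        if random.random() < failure_rate:
            raise ConnectionError("simulated dropped connection")
        return "ok"

    def send_with_retry(payload, attempts=5, base_delay=0.1):
        """Retry with exponential backoff instead of failing on the first error."""
        for attempt in range(attempts):
            try:
                return flaky_send(payload)
            except ConnectionError:
                if attempt == attempts - 1:
                    raise  # out of retries: surface the error to the caller
                time.sleep(base_delay * (2 ** attempt))

    print(send_with_retry({"event": "signup"}))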


5. Siloed Teams can Lead to Communication Gaps

One of the advantages of distributed systems is that a team can be assigned to each component. This allows each team to become an expert on just one part of the system, enabling the scaling of software organizations like we see at Amazon. The downside to these siloed teams is that communication becomes more difficult, but a good test suite, thorough documentation, and self-documenting APIs can help minimize these gaps.


How Do We Test Distributed Systems?

Distributed software has become more popular as the cost of cloud computing has gone down and network connections have become more reliable. While distributed systems offer unique advantages for scaling and cost savings, they introduce new challenges for testing. Borrowing from some of Martin Fowler’s ideas on testing microservices and my own experience building layered test plans, I’ll be presenting a strategy for testing distributed systems at this year’s API Strategy & Practice Conference. If you’re interested in learning more about the topic of testing distributed software, or you have questions, you can find me at the conference, or anytime on Twitter.


Fields of the future: A hands-free harvest By Dr Tim Fox #HarperAdamsUniversity

As the world faces the start of a new agricultural revolution, Dr Tim Fox explores not only the benefits it will bring but also the threats the fledgling transformation will face. A world first and the beginnings of another agricultural revolution occurred earlier this month, deep in the heart of the English countryside. A team of engineers, agriculturalists and technicians working at Harper Adams University, near Newport in Shropshire have just brought in a crop of Spring barley farmed from start to finish without setting a single human foot inside the field - a world first for automation and robotics that might be offering us a glimpse of the global future of rural farming. Farming is no stranger when it comes to engineering inventions leading to disruption of the sector, and once again it finds itself at the forefront of change. This time it is digital, a combination of artificial intelligence, digital connectivity, robots, drones and other forms of machine automation that are set to start a new agricultural revolution and create transformations across this global industry. There was a time when harvesting meant every able-bodied person in the village headed to the fields to gather in the fruits of another year of back breaking toil. Digging and preparing the land, sowing seeds, pulling out weeds and tending growth, until finally the hard work of cutting and bringing in the harvest – only to begin the whole labour intensive process again soon enough. But scientific and engineering advances, particularly in the agricultural and industrial revolutions of the past 200 or so years, changed all of that. With mechanisation, as well as modern methods and processes, the need for farm labour has reduced dramatically and today in the UK only around 1.5% of the nation’s workforce is engaged in agriculture. Some fear that a new agricultural revolution underpinned by digital technology will lead to vast unmanned industrialised farms run by robots and autonomous machines, further decreasing the sector’s human workforce. Others, however, believe that new jobs will emerge, for example that tractor drivers will become “fleet managers” controlling teams of smaller, agile, more environmentally friendly precision machines; and field agronomists “crop intervention managers” conducting operations remotely in all weathers from the convenience of their smartphones. There will, of course, also be new agriculture related jobs for robotics and mechatronics engineers, software and IT hardware developers, and support technicians, as well as equipment manufacturers and distributors. This revolution could be less about workforce reduction and more about workforce modernisation. With or without “fleet managers” however, a rural landscape populated by robots and autonomous vehicles charged with looking after one of the most fundamental needs of human life, food supply, does raise some challenging issues to be tackled. Not least of these is cybersecurity. Computer system hackers come in many forms, ranging from individuals who carry out attacks purely for personal pleasure, through to organised crime and state-sponsored teams of cyber warriors. As more and more digital technology is deployed, the ‘attack’ surface available for hackers to probe increases, along with the potential pathways to sensitive data and/or system hijacking and control. The recent spate of ransomware attacks on UK computer systems, including that of the NHS, is indicative of this growing threat. 
Farm-wide integration of digital technology with machines such as drones, driverless tractors and robotic pest controllers provides a potential opportunity for hackers to take control of farm systems and, ultimately, prevent farms from functioning. Cyberattacks on farms, with hackers taking over machines and potentially destroying crops or halting production, and holding farmers and finely balanced 'just-in-time' food supply chains to ransom, could be a risk. Globally, cyberattacks on critical infrastructure are on the increase. In 2015 the USA recorded nearly 300 incidents, compared to around 200 in 2012, and it has been estimated that in the second half of 2016 about 40% of industrial computers worldwide were attacked. Food production and distribution is critical infrastructure. And as agriculture adopts more and more digital technology, it will increasingly be open to attack. At the core of the cybersecurity challenge are three dynamics: the vulnerabilities inherent in the devices and sensors being connected together, the scale of the systems to be implemented, and the lack of awareness of the potential threat and the counter-strategies to follow. In the case of many devices and sensors, the level of cybersecurity protection is often very basic. This situation is frequently exacerbated by either hard-coded passwords, or users leaving the default factory-supplied passwords in place when the technology is introduced. Additionally, a lack of standards and regulation means that manufacturers are not incentivised to incorporate security features and, when faced with market pressures for low-cost products, create designs that are inherently vulnerable. The issue of scale plays out in the size of the attack surface created through the deployment of hundreds of vulnerable products across a system, and in the difficult challenge of ensuring that the software or firmware running them all is up to date with security upgrades and 'patches'. The entire network is only as secure as its weakest link. The lack of awareness is apparent in the sector, even though there are already many digital technologies available for use in agriculture. Cyberattacks often occur through firewalls, webcams, wireless access points, routers, printers and phones, all of which are commonly found on UK farms and offer a pathway for hackers to enter networks and cause disruption. From an engineering perspective, this is all solvable. However, to do so the challenge must be recognised by government, the farming sector and the research community. The lack of standards and regulation requires governments to come together to urgently develop and implement these as cross-border instruments, providing support and enforcing penalties for non-compliance. More broadly, the agricultural research community needs to explore the cybersecurity threats to farms of the future, work with the UK's new National Cyber Security Centre to raise awareness of these threats and of the appropriate security strategies to follow, and develop best-practice cybersecurity guidance and advice for the sector. The basic security frameworks and building blocks have to be put in place before, not after, the industry moves from a successful 2017 demo in Shropshire to a future global revolution. Automation and robotisation of agriculture clearly offers many potential societal benefits, including enhanced food security for a rapidly growing global population, reduced ecological degradation from industrial-scale production, and improved environmental stewardship.
And as was shown in Shropshire this year, the engineering capability and technical building blocks are largely available to begin moving forward. What is needed now is some government and sector leadership to guard against future cyber threats to this fledgling digital transformation. Dr Tim Fox is Chair, Food and Drink Engineering Committee, Institution of Mechanical Engineers.

SQIL 2017: 9th edition of the Semaine québécoise de l'informatique libre (SQIL), September 16-24, 2017 #FACIL


Calendar: https://2017.sqil.info/activites/

Software Freedom Day 2017 in Quebec City, Saturday, September 16, 2017, from 10:00 a.m. to 5:00 p.m.: http://agendadulibre.qc.ca/events/1672

The Semaine québécoise de l'informatique libre (SQIL) is an annual event coordinated by FACIL with the support of many organizations from Quebec's free-software community. The 9th edition of the SQIL will take place from September 16 to 24, 2017.

An annual event coordinated by FACIL's volunteers, the Semaine québécoise de l'informatique libre (SQIL) consists of nine intense days of free-culture activities all across Quebec during the month of September. Everything "libre" is of interest to the SQIL: software, hardware, knowledge, and culture; in short, anything that can be placed under a free license and that contributes to building, developing, and preserving the digital commons.

History: SQILs were organized in 2004, 2005, 2006, 2007, 2008, 2014, 2015, and 2016. For more information on previous SQILs, see FACIL's wiki.


The Byzantine Generals Problem By LESLIE LAMPORT, ROBERT SHOSTAK, and MARSHALL PEASE SRI International

Reliable computer systems must handle malfunctioning components that give conflicting information to different parts of the system. This situation can be expressed abstractly in terms of a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others.
The problem is to find an algorithm to ensure that the loyal generals will reach agreement. It is shown that, using only oral messages, this problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. Applications of the solutions to reliable computer systems are then discussed. Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems--network operating systems; D.4.4 [Operating Systems]: Communications Management--network communication; D.4.5 [Operating Systems]: Reliability--fault tolerance. General Terms: Algorithms, Reliability. Additional Key Words and Phrases: Interactive consistency
1. INTRODUCTION
A reliable computer system must be able to cope with the failure of one or more of its components. A failed component may exhibit a type of behavior that is often overlooked--namely, sending conflicting information to different parts of the system. The problem of coping with this type of failure is expressed abstractly as the Byzantine Generals Problem. We devote the major part of the paper to a discussion of this abstract problem and conclude by indicating how our solutions can be used in implementing a reliable computer system. We imagine that several divisions of the Byzantine army are camped outside an enemy city, each division commanded by its own general. The generals can communicate with one another only by messenger. After observing the enemy, they must decide upon a common plan of action. However, some of the generals may be traitors, trying to prevent the loyal generals from reaching agreement. The generals must have an algorithm to guarantee that A. All loyal generals decide upon the same plan of action. The loyal generals will all do what the algorithm says they should, but the traitors may do anything they wish. The algorithm must guarantee condition A regardless of what the traitors do. The loyal generals should not only reach agreement, but should agree upon a reasonable plan. We therefore also want to insure that B. A small number of traitors cannot cause the loyal generals to adopt a bad plan. Condition B is hard to formalize, since it requires saying precisely what a bad plan is, and we do not attempt to do so. Instead, we consider how the generals reach a decision. Each general observes the enemy and communicates his observations to the others. Let v(i) be the information communicated by the ith general. Each general uses some method for combining the values v(1), ..., v(n) into a single plan of action, where n is the number of generals. Condition A is achieved by having all generals use the same method for combining the information, and Condition B is achieved by using a robust method. For example, if the only decision to be made is whether to attack or retreat, then v(i) can be General i's opinion of which option is best, and the final decision can be based upon a majority vote among them. A small number of traitors can affect the decision only if the loyal generals were almost equally divided between the two possibilities, in which case neither decision could be called bad. While this approach may not be the only way to satisfy conditions A and B, it is the only one we know of. It assumes a method by which the generals communicate their values v(i) to one another. The obvious method is for the ith general to send v(i) by messenger to each other general. However, this does not work, because satisfying condition A requires that every loyal general obtain the same values v(1), ..., v(n), and a traitorous general may send different values to different generals. For condition A to be satisfied, the following must be true: 1. Every loyal general must obtain the same information v(1), ..., v(n). Condition 1 implies that a general cannot necessarily use a value of v(i) obtained directly from the ith general, since a traitorous ith general may send different values to different generals. This means that unless we are careful, in meeting condition 1 we might introduce the possibility that the generals use a value of v(i) different from the one sent by the ith general--even though the ith general is loyal. We must not allow this to happen if condition B is to be met.
For example, we cannot permit a few traitors to cause the loyal generals to base their decision upon the values "retreat", ..., "retreat" if every loyal general sent the value "attack". We therefore have the following requirement for each i: (...) https://people.eecs.berkeley.edu/~luca/cs174/byzantine.pdf
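
The majority-vote combining method lends itself to a short illustration. Here is a minimal sketch (not from the paper) that assumes condition 1 above already holds, so every loyal general feeds in the same values:

    from collections import Counter

    def combine(values):
        """Robust combining method: majority vote over v(1), ..., v(n),
        with a deterministic tie-break so all loyal generals decide alike."""
        counts = Counter(values)
        top_count = counts.most_common(1)[0][1]
        tied = sorted(v for v, c in counts.items() if c == top_count)
        return tied[0]

    # If every loyal general obtains the same multiset of values,
    # they all compute the same plan:
    print(combine(["attack", "attack", "retreat"]))  # -> "attack"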

"Introducing blockchains for healthcare," 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA)

"Introducing blockchains for healthcare," 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA) | Sciences et technologies | Scoop.it

Z. Alhadhrami, S. Alghfeli, M. Alghfeli, J. A. Abedlla and K. Shuaib, "Introducing blockchains for healthcare," 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA), Ras Al Khaimah, 2017, pp. 1-4. doi: 10.1109/ICECTA.2017.8252043

Abstract:

Blockchains as a technology emerged to facilitate money exchange transactions and eliminate the need for a trusted third party to notarize and verify such transactions, as well as to protect data security and privacy. New structures of Blockchains have been designed to accommodate the need for this technology in other fields such as e-health, tourism and energy. This paper is concerned with the use of Blockchains in managing and sharing electronic health and medical records to allow patients, hospitals, clinics, and other medical stakeholders to share data amongst themselves and increase interoperability. The selection of the Blockchain architecture to use depends on the entities participating in the constructed chain network. Although the use of Blockchains may reduce redundancy and provide caregivers with consistent records about their patients, it still comes with a few challenges, which could infringe on patients' privacy or potentially compromise the whole network of stakeholders. In this paper, we investigate different Blockchain structures, look at existing challenges and provide possible solutions. We focus on challenges that may expose patients' privacy and on the resiliency of Blockchains to possible attacks.

Introduction

Within the next decade, health care services and applications are expected to generate trillions of dollars in revenue due to their integration as part of the Internet of Things (IoT) paradigm [1]. Most remarkably, smart healthcare has shown significant reductions in mortality rates and the cost of healthcare, while improving quality, for instance, by reducing emergency room (ER) visits and hospital stays [2]. Being voluminous, health care records are best stored in the cloud to enable easy access and sharing of information among the different stakeholders. In addition, the security and privacy measures offered by the cloud increase the resiliency of data. However, the use of cloud storage does not allow interoperability between the different care providers. In addition, the integrity and authenticity of the data cannot be guaranteed. One possible technology to enhance the integrity, authenticity, and consistency of stored and exchanged medical records is Blockchains. Blockchains can guarantee the security of sensitive data by tracking access to confidential medical records and ensuring authorized access. Blockchains can serve as a distributed database that hardens medical reports against tampering [3]. As a distributed trust mechanism, Blockchains address the security issues associated with a distributed database of patient records that could be managed by different parties such as caregivers, hospitals, pharmacies, insurance companies, regulators and the patients themselves. Blockchains as a technology relies on public key cryptography and hashing mechanisms as a means to keep track of the historical transactions pertaining to distributed patient records while preserving confidentiality, integrity and availability. This ensures that records are not lost, wrongly modified, falsified, or accessed by unauthorized users. In Blockchains, patients' records can only be appended to the database, not removed. New information can be securely linked to a previous record using cryptographic hashing. Records are added to the blockchain based on a consensus among the majority of miners in the blockchain. Miners are a set of special nodes working together to validate new transactions added to a blockchain. To be able to add a record to a blockchain, miners have to compete to solve a difficult mathematical problem known as Proof of Work (POW), which takes 10 minutes on average. This ensures that no single party can modify or tamper with verified stored records. In addition, Blockchains can enable caregivers to provide personalized health recommendations to patients under encrypted aliases, without the need to reveal their identities. The transfer of patient information among the members of the blockchain can easily be done in a secure manner without the need for any additional cumbersome verifications. This can be implemented through what are known as smart contracts: self-executing, agreed-upon conditions that reside on the blockchain and are trusted by all its members. Using smart contracts allows for the exchange of information/data among the healthcare blockchain members in a transparent and conflict-free manner without the need for any middleman. Although Blockchains as a technology was originally designed for the Bitcoin cryptocurrency [4], it has been found useful in a wide range of applications such as the energy sector [5], smart contracts [6]–[7][8], personal data protection [3], healthcare [9], [10] and intelligent transportation systems [11].
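
To make the append-only hashing idea concrete, here is a minimal hash-chain sketch (the record fields are invented; a real deployment would add signatures, access control, and consensus):

    import hashlib
    import json

    def add_record(chain, record):
        """Append-only: each entry embeds the hash of the previous one, so
        altering any historical record invalidates every later hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"record": record, "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        chain.append(body)

    def verify(chain):
        """Recompute every hash and link; any tampering breaks the chain."""
        for i, entry in enumerate(chain):
            body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != entry["hash"]:
                return False
            if i and entry["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    chain = []
    add_record(chain, {"patient": "p1", "note": "blood pressure 120/80"})
    add_record(chain, {"patient": "p1", "note": "prescription renewed"})
    print(verify(chain))              # True
    chain[0]["record"]["note"] = "x"  # tampering...
    print(verify(chain))              # ...is detected: False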
In this paper, we present the different types of Blockchains and their backbones, explore the existing implementations of Blockchains in the healthcare sector, and discuss the security and privacy challenges and drawbacks associated with these models. Additionally, the paper delivers a set of proposed solutions to address current pitfalls. The rest of the paper is organized as follows. In Section 2, we explore related background information and the various types of Blockchains. In Section 3, we look at security and privacy challenges, and in Section 4 we introduce proposed solutions to these challenges, before we conclude the paper in Section 5.

SECTION II.

Background and Types of Blockchains

Satoshi Nakamoto is considered the founder of Bitcoin, which is the first implementation of Blockchains. The concept of Bitcoin was first introduced in 2008 to enable two entities to perform transactions without requiring a trusted third party. The system relies on cryptographic proofs rather than trust [4]. While the first models of Blockchains were accessed by Internet users with no explicit permissions, corporations have since effectively implemented additional instances in a permissioned context, limiting the task of determining blocks to a set of carefully selected organizations with suitable permissions [12].

A. Permissionless Blockchains

Public Blockchains are a form of peer-to-peer decentralized network that allows multiple nodes to participate in the network and perform transactions without having to rely on a trusted third party. Public Blockchains are said to be permissionless since they do not restrict access to certain nodes. Blockchains are used to validate, store, and maintain records about the transactions occurring over the network. These transactions are stored in blocks on a public ledger. Each transaction exchanged between the various nodes is verified and added to the blockchain by a set of special nodes called miners. The transactions are stored publicly in the blockchain, and everyone, including adversaries, can see their content. Although records are stored on the public ledger, the actual exchange of data takes place off-chain. Miner nodes are required to solve a difficult mathematical problem known as a proof of work; the block is appended to the chain once consensus is achieved through it. In public Blockchains, a rewarding mechanism is necessary to incentivize users to join the network and mine blocks of transactions, compensating expended resources such as CPU time and electricity [4]. One of the best-known public Blockchains currently is the Ethereum blockchain. The Ethereum blockchain uses Ether (the Ethereum currency) to reward the first node that mines a block successfully [13]. Once a block is validated and added to the chain, it cannot be erased [14]. Modifying or changing a block requires recomputing the hash value of the targeted block and all subsequent blocks, which makes it very hard for attackers to forge [4]. Moreover, it is practically impossible for attackers to modify a blockchain, as the blockchain is stored on several nodes in a distributed manner. Figure 1 illustrates how a new block is added to the blockchain. Recently, several research projects were conducted on the use of public Blockchains in healthcare, the most recent being MedRec [9]. The MedRec model utilizes the Ethereum platform to establish a decentralized medical record sharing system based on smart contracts. It enables the sharing of medical records between different medical stakeholders, patients, and any other party that processes medical or health records. Healthcare providers can add patient records at any given time; however, patients are the ones who decide what information they want to grant other providers access to. MedRec suggests two mining models. The first is based on the use of Ether as a rewarding mechanism, while the second suggests the use of aggregated and anonymized data as a reward to incentivize researchers, where the first node to mine the block is granted access to the wanted data.

Figure 1. (1) A block is created to represent the transaction. (2) The block is then forwarded to all participating nodes. (3) Miners compete to validate the block; the first node to compute the hash is rewarded. (4) The validated block is appended to the chain.

B. Permissioned Blockchains

Although most of the characteristics of Blockchains make them a good choice for distributed applications such as smart healthcare systems, Blockchains cannot be used for storage and transmission of sensitive information such as healthcare records without taking proper precautions. The sole purpose of public Blockchains is not to provide confidentiality but rather to allow for publicly accessible, verifiable and unforgeable storage of data [15]. Moreover, transmitting and storing a large amount of data on Blockchains raises scalability concerns for large-scale and widely used Blockchain applications such as healthcare systems [16]. As a result, a modified version of the original technology, known as permissioned Blockchains, is being introduced by the industry [15]–[16][17][18][19]. Permissioned Blockchains are believed to provide better confidentiality, privacy and scalability in addition to the basic functionality supplied by the original Blockchain model. There are two varieties of permissioned Blockchains: private Blockchains and consortium Blockchains. There has been slight emphasis on the distinction between private and consortium Blockchains [13], as they both run on a private network [20]–[21][22]. Both are permissioned Blockchains, in which direct access to Blockchain data and the submission of transactions are restricted to a predefined set of entities [23].

1) Private Blockchains

Private Blockchains are blockchains where write permissions are kept centralized to one organization/entity, whereas read permissions may be public or restricted to an arbitrary extent. Private Blockchains are based on a decentralized topology with the aim of making sure that chosen participants can view Blockchain activity, introducing control over which transactions are permitted, and enabling secure mining without proof of work and its additional associated costs [24]. In private Blockchains, a high degree of privacy is available because of the restriction on write and read permissions. Another advantage of private Blockchains over public Blockchains [19] is that a company running a private Blockchain can easily change or modify the rules used and revert transactions. Moreover, the validators are known, which prevents the addition of falsified blocks to the chain. Furthermore, with very well-connected nodes, faults can be fixed by manual intervention, and chain participants can control the maximum block size, which solves scalability issues. In addition, transactions are only verified by approved participants, which requires less processing power and thus leads to cheaper transactions [24].

Figure 2. (1) A block is generated by an authorized node. (2) The caregiver verifies the transaction. (3) The caregiver updates the distributed ledger.

In summary, private blockchains are nothing more than a specified distributed ledger that records consensus on transactions between authorized parties into blocks. Any authorized node can create a transaction or a block. The transaction will be validated and distributed via the arbiter, without the need for cryptographic hashing. Figure 2 shows an example of how private Blockchains may operate in a healthcare setting. In private Blockchains, trust is centralized in one arbiter; the arbiter is responsible for adding new records and maintaining a central distributed ledger. However, read permissions for other nodes might be public or restricted to some extent. Whenever a transaction takes place, the involved nodes will inform the arbiter to observe the transaction and update the ledger. In private Blockchains, trust exists between nodes. Hence, private Blockchains are very similar to a standard distributed database.

2) Consortium Blockchains

Consortium Blockchains are Blockchains in which the consensus process is controlled by a preselected set of trusted nodes (i.e., entities) [13]. A block is added to the chain once consensus is achieved through validation of the transaction by a group from the preselected set of nodes. For example, if there are five known/trusted nodes on the chain, then for any block to be added or processed on the chain, a minimum of three entities must sign the transaction to validate the block to be appended to the chain. Figure 3 illustrates an example of how a block is added to the chain in a healthcare setting in three steps. In a consortium Blockchain, the right to read the blockchain may be public or restricted to participants only. In addition, consortium Blockchains are considered partially decentralized, unlike private Blockchains. A consortium blockchain model tends to appeal more to companies, precisely because it is decentralized, unlike private Blockchains. For the healthcare sector, a consortium blockchain would be realized by implementing a blockchain that allows health organizations to share patient EMRs through a distributed ledger [20] using a distributed database [21]. On a consortium blockchain, “there wouldn't be a risk of a breach in Personal Health Information (PHI) security, as only individuals who traditionally had access to this information could access it” [22]. For example, the receptionist will be able to view only the identification information of all patients, while the caregiver will be able to view the medical records of his patients only. There are some previous implementations of consortium Blockchains for healthcare, such as MedChain [20], ModelChain [21], and BlockInsure [22].
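
The three-of-five example above reduces to a small threshold check. Here is a minimal sketch with hypothetical member names (a real system would verify cryptographic signatures, not bare identifiers):

    # Preselected consortium members and the required approval threshold.
    CONSORTIUM = {"hospital_a", "hospital_b", "clinic_c", "insurer_d", "lab_e"}
    THRESHOLD = 3  # e.g., 3 of 5 trusted nodes must sign off

    def block_accepted(approvals: set) -> bool:
        """Approvals from unknown parties are ignored; only preselected
        members count toward the threshold."""
        return len(approvals & CONSORTIUM) >= THRESHOLD

    print(block_accepted({"hospital_a", "clinic_c", "insurer_d"}))  # True
    print(block_accepted({"hospital_a", "mallory"}))                # False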

SECTION III.

Security and Privacy Challenges

Blockchains solve the problem of requiring trusted third parties to perform transactions, but doing so opens the door to security and privacy threats. Although Blockchains are used to establish smart contracts between healthcare providers to grant each other access to certain data or patient records, there is still the potential problem of who is accessing the data and whether they are authorized to do so. Another major issue that could jeopardize PHI and EMRs stems from the nature of blockchain implementations: they do not ensure the confidentiality of data stored or transferred off-chain.

Figure 3. (1) A doctor/physician requests to add a new block for an EHR. (2) The block is verified by the majority of the preselected healthcare entities. (3) Once the block is verified, it is added to the chain.

An additional security issue that may take place is what is known as a Sybil attack [25]. In this attack, a single attacker or group of attackers takes over a network by pretending to be multiple nodes. Moreover, this attack might isolate the attacked node from the chain network, preventing it from participating in any chain activities. Another problem that may arise is the inference of private data. For example, if the generated hash values of patients' private keys were stored on the block itself, and a certain doctor knew the hash value of a certain patient's private key, then he or she might be able to infer useful information such as how many times the patient visited the hospital and at what times. This could compromise the privacy of patients. Moreover, since Blockchains depend solely on a set of cryptographic algorithms to ensure integrity, the security and integrity of the whole network could be compromised if practical quantum computers come into existence. Quantum computers use qubits instead of binary bits and offer far greater processing speed, which could allow an attacker to falsify blocks by recomputing the hash values of blocks in polynomial time.

SECTION IV.

Proposed Solutions

Guaranteeing that a system is 100% secure is not possible; however, building a robust system that helps reduce exposure to security risks is what one can aim for. Therefore, in this section we propose ways to mitigate the risks of the aforementioned challenges that a blockchain implementation in a healthcare context may face. For the unauthorized access conundrum, we suggest, as part of any proposed solution, the implementation of a blockchain as an access control list. Applying this solution will also eliminate the problem of inferring patients' private data. Referring to [25], a Sybil attack cannot be prevented outright, but it can be mitigated by forcing each miner node to solve a difficult mathematical problem, referred to as proof of work, before it can add a new block to the blockchain. Currently, solving such a problem takes around 10 minutes on average, and since an adversary would have to control more than 50% of the network to defeat this, detection of such an attack is always possible. Finally, quantum computing, when realized, has the potential to end the utilization of Blockchains as currently designed. Therefore, scientists should come up with new Blockchain architectures that do not rely on current cryptographic algorithms (e.g., using post-quantum cryptographic algorithms instead).

SECTION V.

Conclusions

In this paper we discussed permissioned and permissionless Blockchains, their architecture, and how they could be implemented in healthcare. In addition, we discussed security and privacy challenges, including the Sybil attack, and how the use of Blockchains could come to an end because of quantum computers. Moreover, the paper suggested possible solutions for the aforementioned problems.


M. E. Peck, "Blockchain world - Do you need a blockchain? This chart will tell you if the technology can solve your problem," in IEEE Spectrum, vol. 54, no. 10, pp. 38-60, October 2017. doi: 10.110...

According to a study released this July by Juniper Research, more than half the world's largest companies are now researching blockchain technologies with the goal of integrating them into their products. Projects are already under way that will disrupt the management of health care records, property titles, supply chains, and even our online identities. But before we remount the entire digital ecosystem on blockchain technology, it would be wise to take stock of what makes the approach unique and what costs are associated with it. Blockchain technology is, in essence, a novel way to manage data. As such, it competes with the data-management systems we already have. Relational databases, which orient information in updatable tables of columns and rows, are the technical foundation of many services we use today. Decades of market exposure and well-funded research by companies like Oracle Corp. have expanded the functionality and hardened the security of relational databases. However, they suffer from one major constraint: they put the task of storing and updating entries in the hands of one or a few entities, whom you have to trust won't mess with the data or get hacked. Blockchains, as an alternative, improve upon this architecture in one specific way: by removing the need for a trusted authority. With public blockchains like Bitcoin and Ethereum, a group of anonymous strangers (and their computers) can work together to store, curate, and secure a perpetually growing set of data without anyone having to trust anyone else. Because blockchains are replicated across a peer-to-peer network, the information they contain is very difficult to corrupt or extinguish. This feature alone is enough to justify using a blockchain if the intended service is the kind that attracts censors. A version of Facebook built on a public blockchain, for example, would be incapable of censoring posts before they appeared in users' feeds, a feature that Facebook reportedly had under development while the company was courting the Chinese government in 2016. However, removing the need for trust comes with limitations. Public blockchains are slower and less private than traditional databases, precisely because they have to coordinate the resources of multiple unaffiliated participants. To import data onto them, users often pay transaction fees in amounts that are constantly changing and therefore difficult to predict. And the long-term status of the software is unpredictable as well. Just as no one person or company manages the data on a public blockchain, no one entity updates the software. Rather, a whole community of developers contributes to the open-source code in a process that, in Bitcoin at least, lacks formal governance. Given the costs and uncertainties of public blockchains, they're not the answer to every problem. “If you don't mind putting someone in charge of a database…then there's no point using a blockchain, because [the blockchain] is just a more inefficient version of what you would otherwise do,” says Gideon Greenspan, the CEO of Coin Sciences, a company that builds technologies on top of both public and permissioned blockchains.

[Figure: "I want a blockchain!" decision chart. Do you really need a blockchain? Blockchains can do some amazing things, but they are definitely not the solution to every problem; asking yourself a handful of the questions on the chart can set you on the right path to an answer. There are more reasons not to use a blockchain than there are reasons to do so, and if you do choose a blockchain, be ready for slower transaction speeds.]

With this one rule, you can mow down quite a few blockchain fantasies. Online voting, for example, has inspired many well-intentioned blockchain developers, but it probably does not stand to gain much from the technology. “I find myself debunking a blockchain voting effort about every few weeks,” says Josh Benaloh, the senior cryptographer at Microsoft Research. “It feels like a very good fit for voting, until you dig a couple millimeters below the surface.” Benaloh points out that tallying votes on a blockchain doesn't obviate the need for a central authority. Election officials will still take the role of creating ballots and authenticating voters. And if you trust them to do that, there's no reason why they shouldn't also record votes. The headaches caused by open blockchains (price volatility, low throughput, poor privacy, and lack of governance) can be alleviated, in part, by tweaking the structure of the technology, specifically by opting for a variation called a permissioned ledger. In a permissioned ledger, you avoid having to worry about trusting people, and you still get to keep some of the benefits of blockchain technology. The software restricts who can amend the database to a set of known entities. This one alteration removes the economic component from a blockchain. In a public blockchain, miners (the parties adding new data to the blockchain) neither know nor trust one another. But they behave well because they are paid for their work. By contrast, in a permissioned blockchain, the people adding data follow the rules not because they are getting paid but because other people in the network, who know their identities, hold them accountable. Removing miners also improves the speed and data-storage capacity of a blockchain. In a public network, a new version of the blockchain is not considered final until it has spread and received the approval of multiple peers. That limits how big new blocks can be, because bigger blocks would take longer to get around. As of July, Bitcoin can handle a maximum of 7 transactions per second. Ethereum tops out at around 20 transactions per second. When blocks are added by fewer, known entities, they can hold more data without slowing things down or threatening the security of the blockchain. Greenspan of Coin Sciences claims that MultiChain, one of his company's permissioned blockchain products, is capable of processing 1,000 transactions per second. But even this pales in comparison with the peak throughput of credit card transactions handled by Visa, an amount The Washington Post reports as being 10 times that number. As the name perhaps suggests, permissioned ledgers also enable more privacy than public blockchains. The software restricts who can access a permissioned blockchain, and therefore who can see it. It's not a perfect solution; you're still revealing your data to those within the network. You wouldn't, for example, want to run a permissioned blockchain with your competitors and use it to track information that gives away trade secrets. But permissioned blockchains may enable applications where data needs to be shielded only from the public at large. “If you are willing for the activity on the ledger to be visible to the participants but not to the outside world, then your privacy problem is solved,” says Greenspan. Finally, using a permissioned blockchain solves the problem of governance.
As the name perhaps suggests, permissioned ledgers also enable more privacy than public blockchains. The software restricts who can access a permissioned blockchain, and therefore who can see it. It's not a perfect solution; you're still revealing your data to those within the network. You wouldn't, for example, want to run a permissioned blockchain with your competitors and use it to track information that gives away trade secrets. But permissioned blockchains may enable applications where data needs to be shielded only from the public at large. “If you are willing for the activity on the ledger to be visible to the participants but not to the outside world, then your privacy problem is solved,” says Greenspan.

Finally, using a permissioned blockchain solves the problem of governance. Bitcoin is a perfect demonstration of the risks that come with building on top of an open-source blockchain project. For two years, the developers and miners in Bitcoin have waged a political battle over how to scale up the system. This summer, the sparring went so far that one faction split off to form its own version of Bitcoin. The fight demonstrated that it's impossible to say with any certainty what Bitcoin will look like in the next month, year, or decade, or even who will decide that. And the same goes for every public blockchain. With permissioned ledgers, you know who's in charge. The people who update the blockchain are the same people who update the code. How those updates are made depends on what governance structure the participants in the blockchain collectively agree to.

Public blockchains are a tremendous improvement on traditional databases if the things you worry most about are censorship and universal access. Under those circumstances, it might just be worth it to build on a technology that sacrifices cost, speed, privacy, and predictability. And if that sacrifice isn't worth it, a more limited version of Satoshi Nakamoto's original blockchain may balance out your needs. But you should also consider the possibility that you don't need a blockchain at all.
more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

Holochain: scalable agent-centric distributed computing. DRAFT (ALPHA 0). By Eric Harris-Braun, Nicolas Luck, Arthur Brock (Ceptr, LLC)

Holochain: scalable agent-centric distributed computing. DRAFT (ALPHA 0). By Eric Harris-Braun, Nicolas Luck, Arthur Brock (Ceptr, LLC) | Sciences et technologies | Scoop.it

ABSTRACT: We present a scalable, agent-centric distributed computing platform. We use a formalism to characterize distributed systems, show how it applies to some existing distributed systems, and demonstrate the benefits of shifting from a data-centric to an agent-centric model. We present a detailed formal specification of the Holochain system, along with an analysis of its systemic integrity, capacity for evolution, total system computational complexity, implications for use-cases, and current implementation status.

INTRODUCTION

Distributed computing platforms have achieved a new level of viability with the advent of two foundational cryptographic tools: secure hashing algorithms, and public-key encryption. These have provided solutions to key problems in distributed computing: verifiable, tamper-proof data for sharing state across nodes in the distributed system, and confirmation of data provenance via digital signature algorithms. The former is achieved by hash-chains, where monotonic data-stores are rendered intrinsically tamper-proof (and thus confidently sharable across nodes) by including hashes of previous entries in subsequent entries. The latter is achieved by combining cryptographic encryption of hashes of data and using the public keys themselves as the addresses of agents, thus allowing other agents in the system to mathematically verify the data's source.

Though hash-chains help solve the problem of independently acting agents reliably sharing state, we see two very different approaches in their use which have deep systemic consequences. These approaches are demonstrated by two of today's canonical distributed systems:

1. git (https://git-scm.com/about): In git, all nodes can update their hash-chains as they see fit. The degree of overlapping shared state of chain entries (known as commit objects) across all nodes is not managed by git but rather explicitly by action of the agent making pull requests and doing merges. We call this approach agent-centric because of its focus on allowing nodes to share independently evolving data realities.

2. Bitcoin (https://bitcoin.org/bitcoin.pdf): In Bitcoin (and blockchain in general), the “problem” is understood to be that of figuring out how to choose one block of transactions among the many variants being experienced by the mining nodes (as they collect transactions from clients in different orders), and committing that single variant to the single globally shared chain. We call this approach data-centric because of its focus on creating a single shared data reality among all nodes.

We claim that this fundamental original stance results directly in the most significant limitation of the blockchain: scalability. This limitation is widely known and many solutions have been offered. Holochain offers a way forward by directly addressing the root data-centric assumptions of the blockchain approach (...).

https://github.com/metacurrency/holochain/blob/whitepaper/holochain.pdf
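To make the hash-chain mechanism concrete, here is a minimal sketch (ours, not from the paper) of a monotonic store in which each entry commits to the hash of its predecessor, so any retroactive edit invalidates every later link:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Hash a chain entry deterministically."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    """Append an entry that commits to the hash of the previous entry."""
    prev = entry_hash(chain[-1]) if chain else None
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    """Recompute every link; a tampered entry breaks all later links."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != entry_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append(chain, "first entry")
append(chain, "second entry")
assert verify(chain)
chain[0]["data"] = "tampered"   # a retroactive edit...
assert not verify(chain)        # ...is immediately detectable
```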

https://holochain.org/

more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

Using Open Source Code By Ibrahim Haddad - The Linux Foundation / #BaselineComplianceProgram #OpenSourcePolicyExamplesAndTemplates

Using Open Source Code By Ibrahim Haddad - The Linux Foundation / #BaselineComplianceProgram #OpenSourcePolicyExamplesAndTemplates | Sciences et technologies | Scoop.it
https://github.com/todogroup/policies/blob/master/linuxfoundation/lf_compliance_approval.pdf
One of the most important responsibilities of an open source program office is ensuring that your organization meets its legal obligations when integrating open source code with proprietary and third-party source code in your commercial products. You need to establish guidelines on how developers can use open source code, and detailed processes to track where open source code is coming from, how it’s licensed, and where it ultimately ends up. This guide gets you started with a baseline compliance program for using, releasing, and distributing open source code.
Section 1
Why track and review code?

Simply stated, if your company isn't tracking how and where its developers use open source code, you're at risk of noncompliance with applicable open source licenses — an expensive path to go down, both in legal fees and in the engineering time spent correcting the error. Ignoring your open source legal obligations also has repercussions for your company's reputation in the open source community. An open source program office helps centralize policies and decision-making around open source consumption, distribution, and release; tracks code provenance and use; and ensures the organization doesn't run afoul of its compliance obligations. Ideally, your open source program includes a complete compliance program, developed with the help of your legal counsel. In this guide, we'll cover one important aspect of your compliance program: your policy and processes for using, releasing, and distributing open source code.
There are several benefits companies can experience from maintaining an open source compliance program:

- Gain a technical advantage, since compliant software portfolios are easier to service, test, upgrade, and maintain.
- Identify crucial pieces of open source code. You'll discover which code is in use across multiple products and parts of your organization, and which components are highly strategic and beneficial to your open source strategy.
- Demonstrate the costs and risks associated with using open source components. This is easier to see when code goes through multiple rounds of review.
- Build community trust. In the event of a compliance challenge, such a program can demonstrate an ongoing pattern of acting in good faith.
- Prepare for a possible acquisition, sale, or new product or service release. This is a less common benefit, but compliance assurance is a mandatory practice before the completion of such transactions.
- Build credibility in the supply chain. You can improve compliance across your software supply chain when dealing with OEMs and downstream vendors.
Section 2
Compliance roles and responsibilities

Within your open source program you'll want to create a designated open source compliance team that's tasked with ensuring open source compliance. The core team, often called the auditing team or the Open Source Review Board (OSRB), consists of representatives from engineering and product teams, one or more legal counsel, and the Compliance Officer (who is typically the open source program manager). Other individuals across multiple departments also contribute on an ongoing basis to your open source compliance efforts: documentation, supply chain, corporate development, IT, localization, and the Open Source Executive Committee (OSEC), which oversees the overall open source strategy. Unlike the core team, members of the extended team work on compliance only part-time, based on tasks they receive from the OSRB.

The OSRB is in charge of creating an open source compliance strategy and a set of processes that determine how a company will implement these rules on a daily basis. The strategy establishes what must be done to ensure compliance and offers a governing set of principles for how employees interact with open source software. It includes a formal process for the approval, acquisition, and use of open source, and a method for releasing software that contains open source or that's licensed under an open source license.
Section 3
A simple policy for using open source code

The usage policy is an essential component of any compliance program. This set of rules is included in your open source strategy document (you have one, right?) and made available to everyone for easy reference. The usage policy ensures that any software (proprietary, third-party, or open source) that makes its way into the product base has been audited, reviewed, and approved. It also ensures that your company has a plan to fulfill the license obligations resulting from using the various software components, before your products make it to customers.

There is no need to make a lengthy or complicated document. A good open source usage policy includes six simple rules (a hypothetical enforcement sketch follows this list):

- Engineers must receive approval from the OSRB before integrating any open source code in a product.
- Software received from third parties must be audited to identify any open source code included, which ensures license obligations can be fulfilled before a product ships.
- All software must be audited and reviewed, including all proprietary software components.
- Products must fulfill open source licensing obligations prior to customer receipt.
- Approval for using a given open source component in one product is not approval for another deployment, even if the open source component is the same.
- All changed components must go through the approval process.
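As an illustration of how such rules can be wired into day-to-day engineering, the sketch below encodes the first and fifth rules as a pre-build check. Everything here (the function, the ticket IDs, the data layout) is hypothetical, not from the guide:

```python
# Hypothetical pre-build gate: a component must have an OSRB approval
# (rule 1), and approval is scoped to a specific product rather than
# granted globally (rule 5). All names and data are illustrative.
APPROVALS = {
    # (component, version, product) -> OSRB ticket ID
    ("zlib", "1.2.11", "product-x"): "OSRB-142",
}

def check_component(component: str, version: str, product: str) -> str:
    """Return the approval ticket, or fail the build if none exists."""
    ticket = APPROVALS.get((component, version, product))
    if ticket is None:
        raise RuntimeError(
            f"{component} {version} is not OSRB-approved for {product}; "
            "submit an open source usage request form before integrating it."
        )
    return ticket

print(check_component("zlib", "1.2.11", "product-x"))  # prints: OSRB-142
```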
Section 4
5-stage code review process

Once you have a policy in place, you must plan and create a process that makes it easy to apply the policy. Your job is to grease the wheels for developer use of open source and contribution to open source projects.
The process begins by scanning the source code of the software package in question, then moves on to identifying and resolving any discovered issues, performing legal and architectural reviews, and making a decision regarding the usage approval. The diagram, below, illustrates a simplistic view of a compliance usage process. In reality, the process is much more iterative in nature. Keep in mind that these phases are for illustration purposes and may need to be modified depending on your company's own needs and open source program configuration. Let's walk through each stage in the process.

Stage 1: Source Code Scan

In the source code scanning phase, all the source code is scanned using specialized software tools (many commercial vendors offer such tools, in addition to a couple of open source alternatives). This phase typically kicks off when an engineer submits an online usage form. (See the sample usage form and rules for using it, below.) The form includes all the information about the open source component in question, and specifies the location of the source code in the source code repository system. The form submission automatically creates a compliance ticket in a system such as JIRA or Bugzilla, and a source code scanning request is sent to the designated auditing staff. Periodic full platform scans should also take place every few weeks to ensure that no open source software component has been integrated into the platform without a corresponding form. If any is found, a JIRA ticket is automatically issued and assigned to the auditing staff.

Some of the factors that can trigger a source code scan include:

- An incoming usage form, usually filled out by engineering staff.
- A periodically scheduled full platform scan. Such scans are very useful for uncovering open source code that snuck into your software platform without a usage form.
- Changes in a previously approved software component. In many cases, engineers start evaluating and testing with a certain version of an OSS component, and later adopt that component when a new version is available.
- Source code received from a third-party software provider who may or may not have disclosed open source.
- Source code downloaded from the web with an unknown author and/or license, which may or may not have incorporated open source code.
- A new proprietary software component entering the build system, where engineering may or may not have borrowed open source code and used it in a proprietary software component.

After the code is scanned, the scanning tool produces a report that provides information on:

- Known software components in use, also known as the software Bill of Materials (BoM)
- Licenses in effect, license texts, and a summary of obligations
- License conflicts to be verified by legal
- File inventory
- Identified files
- Dependencies
- Code matches
- Files pending identification
- Source code matches pending identification

Note on Downloaded Open Source Packages: It is vital to archive open source packages downloaded from the web in their original form. These packages will be used in a later stage (prior to distribution) to verify and track any changes introduced to the source code, by computing the difference between the original package and the modified package.
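Commercial scanners do far more than this (fuzzy snippet matching against large code databases, fingerprinting), but the first pass of a Stage 1 scan can be pictured as a simple walk over the source tree. A toy sketch, in which the license-file names, the license list, and the path are all illustrative assumptions:

```python
import os

# Toy first pass of a Stage 1 scan: walk a source tree, collect license
# files, and grep file headers for well-known license names. Real scanners
# do far more; this only sketches the shape of the report.
LICENSE_FILES = {"LICENSE", "LICENSE.txt", "COPYING", "NOTICE"}
KNOWN_LICENSES = ["Apache License", "GNU General Public License", "MIT License", "BSD"]

def scan(root: str) -> dict:
    report = {"license_files": [], "matches": [], "pending_identification": []}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name in LICENSE_FILES:
                report["license_files"].append(path)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    head = f.read(4096)  # license headers sit near the top
            except OSError:
                continue
            hits = [lic for lic in KNOWN_LICENSES if lic in head]
            if hits:
                report["matches"].append((path, hits))
            elif name in LICENSE_FILES:
                # A license file whose text we failed to recognize.
                report["pending_identification"].append(path)
    return report

print(scan("path/to/component"))  # illustrative path
```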
If a third-party software provider uses open source, the product team integrating that code into the product must submit an open source usage form describing the open source to be used. If the third-party software provider only provides binaries, not source code, then the product team and/or the software supplier manager who manages the relationship with that provider must obtain confirmation (for instance, a scan report) that there is no open source in the provided software.

Stage 2: Identification and Resolution

In the identification and resolution phase, the auditing team inspects and resolves each file or snippet flagged by the scanning tool. For example, the scanning tool's report can flag issues such as conflicting and incompatible licenses. If there are no issues, the compliance officer moves the compliance ticket forward to the legal review phase. If there are issues to be resolved, the compliance officer creates subtasks within the compliance ticket and assigns them to the appropriate engineers to resolve. In some cases a code rework is needed; in other cases it may simply be a matter of clarification. The subtasks should include a description of the issue, a proposed solution to be implemented by engineering, and a specific timeline for completion. The compliance officer may simply close the subtasks once all issues are resolved and pass the ticket along for legal review. Or they might first order a re-scan of the source code and generate a new scan report confirming that the earlier issues no longer exist. Once they're satisfied that all issues are resolved, the compliance officer forwards the compliance ticket to a representative from the legal department for review and approval. In preparation for legal review, you should attach all licensing information related to the open source software to the compliance ticket, such as COPYING, README, and LICENSE files.

Stage 3: Legal Review

In the legal review phase, the legal counsel (typically a member of the open source review board, or OSRB) reviews reports generated by the scanning tool, the license information of the software component, and any comments left in the compliance ticket by engineers and members of the auditing team. When a compliance ticket reaches the legal review phase, it already contains:

- A source code scan report and confirmation that all the issues identified in the scanning phase have been resolved.
- Copies of the license information attached to the ticket: typically, the compliance officer attaches the README, COPYING, and AUTHORS files available in the source code packages to the compliance ticket. Other than the license information, which for OSS components is usually available in a COPYING or a LICENSE file, you need to capture copyright and attribution notices as well. This information will provide appropriate attributions in your product documentation.
- Feedback from the compliance officer regarding the compliance ticket (concerns, additional questions, etc.).
- Feedback from the engineering representative on the auditing team or from the engineer (package owner) who follows/maintains this package internally.

The goal of this phase is to produce a legal opinion of compliance, and a decision on the incoming and outgoing license(s) for the software component in question. The licenses are referred to in the plural because in some cases a software component can include source code available under different licenses.
There are three possible outcomes at this stage:

No issues. If there are no issues with the licensing, the legal counsel decides on the incoming and outgoing licenses of the software component and forwards the compliance ticket one step further in the process, into the architecture review phase. The incoming license is the license under which you received the software package. The outgoing license is the license under which you are licensing the software package. In some cases, when the incoming license is a permissive license that allows relicensing (e.g., BSD), companies will relicense that software under their own proprietary license. A more complex example would be a software component that includes proprietary source code, source code licensed under License A, source code available under License B, and source code available under License C. During legal review, the legal counsel will need to decide on the incoming and outgoing license(s):

Incoming licenses = Proprietary License + License A + License B + License C
Outgoing license(s) = ?

Issues. If a licensing issue is found, such as mixed source code with incompatible licenses, the legal counsel will flag these issues and reassign the compliance ticket in JIRA to engineering to rework the code. For example, legal review may uncover that closely held intellectual property has been combined with an open source code package. Legal counsel will flag this and re-assign the compliance ticket to engineering to remove the proprietary source code from the open source component. In the event that engineering insists on keeping the proprietary source code in the open source component, the open source executive committee (OSEC) will have to release the proprietary source code under an open source license.

Unclear. In some cases, if the licensing information is not clear or not available, the legal counsel or engineering staff contact the project maintainer or the open source developer to clarify the ambiguities and to confirm under which license that specific software component is licensed.

Stage 4: Architecture Review

In the architecture review, the compliance officer and an engineering representative on the auditing team or open source review board perform an analysis of the interaction between the open source, proprietary, and third-party code. This is accomplished by examining an architectural diagram (see an example, below) that identifies:

- Open source components (used "as is" or modified)
- Proprietary components
- Components originating from third-party software providers
- Component dependencies
- Communication protocols
- Other open source packages that the specific software component interacts with or depends on, especially if governed by a different open source license

The result of the architecture review is an analysis of the licensing obligations that may extend from open source to proprietary or third-party software components (and across open source components as well). If the compliance officer discovers any issues, such as a proprietary software component linking to a GPL-licensed component, the compliance officer forwards the compliance ticket to engineering for resolution. If there are no issues, the compliance officer moves the ticket to the final stage in the approval process.
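The core of the architecture review can be thought of as a walk over a dependency graph looking for problematic edges, such as the proprietary-links-GPL case just mentioned. A toy sketch, with illustrative component names, licenses, and a single hard-coded rule:

```python
# Toy architecture review: flag proprietary components that link against
# GPL-licensed ones, the example issue named in the guide. The component
# names, licenses, and the single rule are illustrative.
COMPONENTS = {
    "ui-shell": "Proprietary",
    "media-engine": "GPL-2.0",
    "json-utils": "MIT",
}
LINKS = [("ui-shell", "media-engine"), ("ui-shell", "json-utils")]

def review(components: dict, links: list) -> list:
    issues = []
    for src, dst in links:
        if components[src] == "Proprietary" and components[dst].startswith("GPL"):
            issues.append(f"{src} (proprietary) links {dst} ({components[dst]})")
    return issues

for issue in review(COMPONENTS, LINKS):
    print("needs engineering resolution:", issue)
```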
Stage 5: Final Review

The final review is usually a face-to-face meeting of the auditing team or open source review board (OSRB), during which the team approves or rejects the usage of the software component. The team bases its decision on the complete compliance record of the software component, which includes the following:

- A source code scan report generated by the scanning tool.
- The list of discovered issues, information on how they were resolved, and who verified that these issues were successfully resolved.
- Architectural diagrams and information on how this software component interacts with other software components.
- Legal opinion on compliance, and the decision on incoming and outgoing licenses.
- Dynamic and static linkage analysis, if applicable in an embedded environment (C/C++).

In most cases, if a software component reaches the final review, it will be approved unless a condition has presented itself (such as the software component no longer being in use). Once approved, the compliance officer will prepare the list of license obligations for the approved software component and pass it to the appropriate departments for fulfillment. This can include:

- Updating the software inventory to reflect that the specific OSS software component version x is approved for usage in product y, version z.
- Issuing a ticket to the documentation team to update end user notices in the product documentation, to reflect that open source is being used in the product or service.
- Triggering the distribution process before the product ships.

After OSRB approval, the compliance officer tracks all open tickets and ensures their completion by the time the product ships or the service launches. For a more detailed usage process and possible scenarios, see our ebook Open Source Compliance in the Enterprise.
Section 5
What to do after v1.0

Initial compliance, also called baseline compliance, happens when development starts and continues until the release of the first version of the product. The compliance team identifies all open source code included in the software baseline, and drives all of the source components through the five-stage approval process outlined above.

"It's important to remember that open source compliance doesn't stop with version 1.0." – Ibrahim Haddad, Vice President of R&D and Head of the Open Source Group at Samsung Research America

You will also need to develop an incremental compliance process to check in on the source code once the product ships. This process starts when development begins on a new branch that includes additional features and/or bug fixes. Incremental compliance is the process by which compliance is maintained when product features are added to the baseline version 1.0.

Incremental Compliance

Incremental compliance requires a comparatively smaller effort than establishing baseline compliance. But several challenges can arise. You must correctly identify the source code that changed between version 1.0 and version 1.1, and verify compliance on the delta between the releases:

- New software components may have been introduced.
- Existing software components may have been retired.
- Existing software components may have been upgraded to a newer version.
- The license on a software component may have changed between versions.
- Existing software components may have code changes involving bug fixes or changes to functionality and architecture.

The obvious question is: how can we keep track of all of these changes? The answer is simple: a bill of materials difference tool (BOM diff tool). Given the BOM for product v1.1 and the BOM for v1.0, we compute the delta, and the output of the tool is the following:

- Names of new software components added in v1.1
- Names of updated software components
- Names of retired software components

With this information in hand, achieving incremental compliance becomes a relatively easy task:

- Enter new software components into the five-phase usage approval process.
- Compute a line-by-line diff of the source code in changed software components, and decide if you want to scan the source code again or rely on the previous scan.
- Update the software registry by removing the software components that are no longer used.

The diagram, below, provides an overview of the incremental compliance process. The BOM file for each product release is stored on the build server. The BOM diff tool takes two BOM files as input, each corresponding to a different product release, and computes the delta to produce a list of changes as previously discussed. At this point, the compliance officer will create new compliance tickets for all new software components in the release, update compliance tickets where source code has changed (possibly re-passing them through the process), and finally update the software registry to remove retired software components from the approved list.

[Diagram: Example of incremental compliance process]
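The diff computation at the heart of this process is just set arithmetic over the two inventories. A minimal sketch, assuming each BOM is a simple mapping from component name to version (real BOM files carry much more metadata; the format here is our assumption):

```python
# Minimal BOM diff, assuming each BOM is a mapping of component -> version.
def bom_diff(old: dict, new: dict) -> dict:
    return {
        "added": sorted(new.keys() - old.keys()),
        "retired": sorted(old.keys() - new.keys()),
        "updated": sorted(c for c in old.keys() & new.keys() if old[c] != new[c]),
    }

v1_0 = {"openssl": "1.0.2", "zlib": "1.2.8", "libpng": "1.6.21"}
v1_1 = {"openssl": "1.1.0", "zlib": "1.2.8", "curl": "7.50.1"}

print(bom_diff(v1_0, v1_1))
# {'added': ['curl'], 'retired': ['libpng'], 'updated': ['openssl']}
```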
Open source usage request form

Completing the open source usage request form is an important step when developers bring open source software into your company, and it should be taken very seriously. Developers fill out the online form to request approval to use a given open source component. The form comprises several questions that provide the necessary information for the auditing team or open source review board, allowing it to approve or disapprove the usage of the proposed open source component. The table, below, highlights the information requested in an open source usage request form. Usually, these values are chosen from a pull-down menu to make data entry efficient.

There are several rules governing the OSRB usage form, for instance:

- The form applies only to the usage of open source in a specific product and in a specific context. It is not a general approval of the open source component for all use cases in all products.
- The form is the basis of audit activity and provides information the review team needs to verify that the implementation is consistent with the usage plan expressed in the form, and with the audit and architectural review results.
- The form must be updated and re-submitted whenever the usage plan for that specific open source component changes.
- The auditing team or review board must approve the form before engineering integrates the open source into the product build.
- The open source executive committee must approve the usage of any open source package where licensing terms require granting a patent license or patent non-assertion.
Section 6
Sample open source usage request form

Section 7

Final words

Open source compliance is an essential part of the software development process. If you use open source software in your product(s) and you do not have a solid compliance program, then you should consider this guide a call to action. At its core, open source compliance consists of a set of actions that control the intake and distribution of open source used in commercial products. The result of compliance due diligence is an identification of all open source used in the product (components and snippets) and a plan to meet the license obligations. For a detailed guide to open source compliance, download our free ebook, Open Source Compliance in the Enterprise by Ibrahim Haddad.

Section 8

Architecture diagram template

An architectural diagram, used in the architecture review phase of the open source review process, illustrates the interactions between the various software components in an example platform. Here is an example architectural diagram that shows:

- Module dependencies
- Proprietary components
- Open source components (modified versus as-is)
- Dynamic versus static linking
- Kernel space versus user space
- Shared header files
- Communication protocols
- Other open source components that the software component in question interacts with or depends on, especially if governed by a different open source license
more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

CA-211: DIVE: The first serious game for the therapeutic education of children and adolescents with type 1 diabetes - ScienceDirect

CA-211: DIVE: The first serious game for the therapeutic education of children and adolescents with type 1 diabetes - ScienceDirect | Sciences et technologies | Scoop.it
https://www.youtube.com/watch?v=ikpG2e8EKbM
CA-211: DIVE: The first serious game for the therapeutic education of children and adolescents with type 1 diabetes. C. Godot (1), P. Maccini (2), N. Lepage (3), I. Jourdon (3), L. Gonzalez (4), A. Stupa (4), M. Polak (4), J. Beltrand (1). (1) Hôpital universitaire Necker enfants malades, Faculté de médecine Paris Descartes, Paris, France; (2) Société solar games, Nice, France; (3) Hôpital universitaire Necker enfants malades, Paris, France; (4) Hôpital universitaire Necker enfants malades, Faculté de médecine Paris Descartes, Paris, France. Available online 21 March 2016. https://doi.org/10.1016/S1262-3636(16)30343-3
Introduction
Video games are little used in the therapeutic education (ETP) of children with type 1 diabetes (T1D). A few trials have shown that they can be useful and motivating for patients, and their use as an educational medium appears well suited to ETP for children. The serious game DIVE aims to give patients theoretical and practical knowledge about T1D through educational videos and quizzes, to let them face certain life situations virtually through simulated scenarios, and to express their experience through a social network.
Patients and Methods
A pilot study (6 weeks) to measure the interest, playability, and acceptability of the game as an ETP medium. An ETP curriculum in 8 chapters covering the basic knowledge essential for new patients (ISPAD recommendations). Recording of the number of chapters completed and the percentage of success on the in-game assessments. Satisfaction questionnaire.
Results
25 patients (F/M: 40/60% – median age: 12.5 years (9.5 to 18) – diabetes duration < 24 months). Median number of chapters completed: 5 (1 to 8) – median success rate: 65% (52 to 86). 21 satisfaction questionnaires were returned. 75% liked the graphics and 66% found the game easy to pick up. 80% found the game interesting and 76% understood their disease better. 86% were satisfied with the educational videos and the topics covered. 90% found the difficulty level of the assessment quizzes satisfactory. The games allowed 60% to better interpret their blood glucose readings. 81% found the social network useful.
Conclusions
This pilot confirms the interest and potential benefit of a serious game for the therapeutic education of pediatric patients. It allowed the game to be improved and finalized based on user feedback from the patients, who will soon have access to this innovative tool.
https://www.mypharma-editions.com/diabete-sanofi-soutient-le-serious-game-dive?platform=hootsuite
more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

Launching an Open Source Project: A Free Guide - The Linux Foundation

Launching an Open Source Project: A Free Guide - The Linux Foundation | Sciences et technologies | Scoop.it
Launching an open source project and then rallying community support can be complicated, but the new guide to Starting an Open Source Project can help.
more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

#RaspNode: Build Your Own Raspberry Pi Bitcoin Full Node [Bitcoin Core]

#RaspNode: Build Your Own Raspberry Pi Bitcoin Full Node [Bitcoin Core] | Sciences et technologies | Scoop.it
This tutorial walks through installing Bitcoin Core v0.13.0 (or possibly higher) on a Raspberry Pi 2 or 3. Options are given for installing with or without the GUI and wallet. We'll store the blockchain on an external USB flash drive (or hard drive), as that is more modular and better than storing it on a large microSD card alongside the OS. If you run into any Raspberry Pi problems while going through these steps, the Raspberry Pi Docs are a good source for help: http://www.raspberrypi.org/documentation/
FULL STEPS
Assembling the Raspberry Pi
Download and Install Raspbian on the microSD card
 Configuring Raspbian
Configuring the USB drive and setting to automount on boot
Enlarge Swap File
Configure networking on the Raspberry Pi
Downloading and installing Bitcoin Core and dependencies
Configure and run bitcoin
Configure home network to sync up with the bitcoin network
Confirm your node is reachable by the network (...)
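For that last step, one quick check is to attempt a TCP connection to the node's peer-to-peer port (8333 on Bitcoin mainnet) from a machine outside your home network. A minimal sketch; the hostname below is a placeholder you would replace with your node's public IP address or DNS name:

```python
import socket

# Probe the node's P2P port (8333 on Bitcoin mainnet) from a machine
# outside your home network. The hostname is a placeholder.
HOST = "your-node.example.org"
PORT = 8333

try:
    with socket.create_connection((HOST, PORT), timeout=10):
        print(f"{HOST}:{PORT} is reachable; the node accepts connections.")
except OSError as err:
    print(f"{HOST}:{PORT} is not reachable ({err}); check your port forwarding.")
```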
more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

HackSpace magazine #1 is out now! - Raspberry Pi

HackSpace magazine #1 is out now! - Raspberry Pi | Sciences et technologies | Scoop.it

HackSpace magazine is here! Grab your copy of the new magazine for makers today, and try your hand at some new, exciting skills.

What is HackSpace magazine?

HackSpace magazine is the newest publication from the team behind The MagPi. Chock-full of amazing projects, tutorials, features, and maker interviews, HackSpace magazine brings together the makers of the world every month, with you — the community — providing the content.

more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

Blockchain as an enabler for public mHealth solutions in South Africa - IEEE Conference Publication. 2017 IST-Africa Week Conference (IST-Africa), Windhoek, Namibia, 2017

Blockchain technology underpins a radical rethink of information privacy, confidentiality, security, and integrity. Because it is a decentralised ledger of transactions across a peer-to-peer network, it disrupts the need for a central third-party verification intermediary. To unlock the potential of mHealth, authentication and verified access to often sensitive data, specialised services, and the transfer of value need to be realised. This paper interrogates current processes and aims to make a case for blockchain technology as an improved security model with the potential to lower the cost of trust, and as an alternative way of managing the burden of proof. This is particularly relevant for mHealth, which, by its nature, is often a distributed endeavour involving the goal-orientated collaboration of a number of stakeholders.

more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

From idea to prototype using the new tools of digital fabrication #ÉlectroniqueLibre #Arduino #FabLabs #IOT

From idea to prototype using the new tools of digital fabrication #ÉlectroniqueLibre #Arduino #FabLabs #IOT | Sciences et technologies | Scoop.it

From idea to prototype using the new tools of digital fabrication

This course will help you master the tools and techniques found in FabLabs: electronics, Arduino, design, the Internet of Things, 2D/3D modeling, 3D printers… Each week, a short video will introduce a new concept of digital fabrication. The video will be accompanied by its transcript and, where appropriate, a complementary lesson. The goal of the course is to build the basic skills that will then allow learners to create just about anything!

This course was originally written with the participation of the "Mines Télécom" institute in Rennes and published on the French government platform FUN (https://www.fun-mooc.fr/courses/MinesTelecom/04002S02/Trimestre_4_2014/about).

Who is this course for?

This course is aimed at curious people and digital technology enthusiasts who want to discover the technologies found in FabLabs.

Prerequisites

Prior experience with software development (C, Python, Java) is recommended (there are other very good MOOCs for learning programming).

Terms of use for the content

Creative Commons BY license (CC-BY): the course content may be used in any way, including for commercial purposes, and derivative works may be created and distributed without restriction, provided the work is attributed to its author by citing their name. This license is recommended for maximum dissemination and use of works.

Authors of this course

An engineer passionate about FabLabs (@galouf), Baptiste began his career in 2009 at the Institut Mines-Télécom as a research engineer. He worked on embedded computing for autonomous vehicles, then joined a team specializing in the Internet of Things. Baptiste is an open-source and open-hardware enthusiast; today he uses digital fabrication technologies for his professional and personal projects. Since 2012 he has taught these subjects at the École des Beaux-Arts de Rennes and at the engineering school Télécom Bretagne, as well as in one of the first French FabLabs, the LabFab.


An engineer, tinkerer, workshop leader, and inventor with more than twenty years of industry experience in electronics and industrial computing, Glenn brings his know-how and his enthusiasm, along with a desire to share and tinker together. Glenn is also leading an "open space" project at his home in the Midi-Pyrénées region.


A curious tinkerer (@otanoshimini), Laurent is curious! He has spent his career in a wide range of fields (post-production, design, IT, general services, the web…). He advocates working across disciplines and keeping a spirit of exploration. He has always loved creating and tinkering, and he continues today with every tool he can find!


A robot tamer (@Eskimon_fr), Simon has been passionate about robotics and embedded systems since his student days. After several robotics-competition entries and plenty of lessons learned, he decided to pass on his passion and his spirit of sharing by writing tutorials, his biggest work being a tutorial on Arduino.


Supreme evangelist (@JohnDaYoung) and lifelong hacker, John has been immersed in digital fabrication for as long as it has existed (maybe even before). A born teacher, John has introduced a colossal number of people to soldering, web development, Arduino, and CNC machines, on every continent and in all conditions. An emblematic figure of DIY culture, John was a natural recruit for the LabFab as a facilitator, a position he still holds today.

more...
No comment yet.
Scooped by Schallum Pierre
Scoop.it!

Software Freedom Day - Saturday 16 September 2017: International Free Software Day

Software Freedom Day - Saturday 16 September 2017: International Free Software Day | Sciences et technologies | Scoop.it

Software Freedom Day (SFD) is an annual worldwide celebration of Free Software. SFD is a public education effort with the aim of increasing awareness of Free Software and its virtues, and encouraging its use.

Software Freedom Day was established in 2004 and was first observed on 28 August of that year. About 12 teams participated in the first Software Freedom Day. Since then it has grown in popularity, and every year more than 300 events are now organized in over 100 cities around the world.

more...
No comment yet.