7wData
Compete on Analytics: a visual digest of Data as a Competitive Edge in all its forms
Curated by Yves Mulkers

Why are enterprises slow to adopt machine learning?

Machine learning has the potential to transform the way organisations interact with the world, to move faster and to provide better customer experience. But while machine learning’s long-term potential certainly looks bright, its adoption in the enterprise may advance more slowly than originally...

How Facebook's data issue is a lesson for everyone

The headlines have been dominated by the recent news around Facebook, Cambridge Analytica and the misuse of customer data. These revelations have wiped millions off Facebook's share price and triggered an ongoing investigation into the incident. With just two months left until the General Data Protection Regulation (GDPR) comes into effect, this scandal could not be timelier.

The ongoing discussions around Facebook's use of customer data are a clear reminder that businesses still face a number of challenges when it comes to protecting customers' data. Many businesses have grown weary of hearing about the major impact that GDPR will have on their operations, but this is no time to be complacent. Had the Facebook incident taken place after GDPR's implementation on 25th May, the company would have been liable for a much more sizeable fine, up to 4% of its global annual turnover. Regardless of how much a company makes, the fines imposed by GDPR are not something to be taken lightly.

To avoid the risk of stiff penalties, businesses need to fundamentally change how they manage their data. This means not only eliminating outdated methods of processing client information, but also adopting new techniques that are in line with the rules set out by GDPR. Historically, businesses have had very few restrictions on how they use information provided by their customers, clients and employees. That will no longer be the case once GDPR comes into effect. A number of new protections will apply, such as data subject consent, safeguards like pseudonymization and the encryption of personal data, and the introduction of officers specifically tasked with ensuring company data is properly governed.

In the past, security or information breaches were the only focus when it came to data compliance, but this too will change post-GDPR. Under the new rules, ignoring issues such as user consent or data transparency will leave companies vulnerable to serious sanctions from the Information Commissioner's Office (ICO).
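As a small aside on pseudonymization: here is a minimal sketch of one common approach, keyed hashing of direct identifiers so records can still be joined for analytics without exposing the raw values. The field names and secret key are purely illustrative assumptions, not from the article or from any specific regulatory guidance.

```python
import hashlib
import hmac

# Illustrative secret key; in practice this would live in a key vault,
# stored separately from the pseudonymized data set.
PEPPER = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, keyed hash (HMAC-SHA256)."""
    return hmac.new(PEPPER, value.encode("utf-8"), hashlib.sha256).hexdigest()

customer_record = {"email": "jane@example.com", "country": "UK", "purchases": 12}

# Keep the analytical fields, replace the identifier with a pseudonym.
safe_record = {**customer_record, "email": pseudonymize(customer_record["email"])}
print(safe_record)
```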

Why Artificial Intelligence Cannot Survive Without Big Data

It may come as no surprise that the internet has been swelling with an ever-increasing amount of data, so much so that it's become difficult to keep track of. If in 2005 we were barely dealing with 0.1 zettabytes of data, this number is now just above 20 zettabytes, and it is estimated to reach a staggering 47 zettabytes by 2020. Apart from the sheer enormous quantity of it, the problem resides in the fact that it's mostly unstructured. And there's nothing more harmful for mankind than providing AI with incomplete or inaccurate data.

It seems that we are dealing with only about 10% structured data, while the rest is just a great jumble of information that isn't tagged and cannot be used in a constructive way by machines. For a better understanding of this subject, it helps to know that email does not qualify as structured data, while something like a spreadsheet is considered tagged and can successfully be scanned by machines. This may not seem that problematic, but we need clean and organized data if we expect AI to improve our lives in sectors such as healthcare, driverless cars, connected homes and so on.

The irony is that we've become really good at creating content and data, but we haven't yet figured out a way to accurately leverage it to serve our needs. It's only natural that data science is one of the fields that has gained a lot of ground in recent years, with more and more data scientists dedicating their lives to sorting out the mess. However, a recent survey shows that, contrary to popular opinion, data scientists spend far less time building algorithms and mining data for patterns than on so-called digital janitorial work: cleaning and organizing data.

As you can see, the numbers are certainly not in favor of a bright AI future. Predictors of the impending wipe-out of humankind by AI have clearly not taken into consideration the fact that although machines can successfully replace the few data scientists who are actually mining data for patterns, they may not be able to replace the vast majority of scientists who devote most of their time to collecting, cleaning and organizing this data. Of course, it's better to simply collect data in a more integral way straight from the get-go, rather than to allocate so much time and resources to 'fix' it retroactively. Fortunately, leaders in AI have slowly reached this understanding as well, using their skills and influence to redirect the path on which data science is headed, and implicitly with it, AI.

We've all heard of cases where machines proved to be superhuman when faced with actual humans, such as when the best Go player in the world was defeated by Google's AlphaGo AI. However, this only shows that AI can be capable of staggering results in niche tasks; its overall capacity is still no match for human capabilities.
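To make the "digital janitorial work" point concrete, here is a minimal, hypothetical sketch of the kind of cleaning and organizing the article describes, using pandas. The column names, values and fixes are invented for illustration; real pipelines are far messier.

```python
import pandas as pd

# Hypothetical raw export: inconsistent casing, duplicate rows, missing values.
raw = pd.DataFrame({
    "patient_id": [101, 101, 102, 103, None],
    "diagnosis": ["Diabetes", "diabetes", "HYPERTENSION", None, "asthma"],
    "visit_date": ["2018-01-03", "2018-01-03", "2018-02-03", "not recorded", "2018-02-11"],
})

clean = (
    raw.dropna(subset=["patient_id"])        # drop rows with no identifier
       .assign(
           diagnosis=lambda d: d["diagnosis"].str.strip().str.lower(),              # normalise labels
           visit_date=lambda d: pd.to_datetime(d["visit_date"], errors="coerce"),   # bad dates become NaT
       )
       .drop_duplicates()                    # remove exact duplicate records
)

print(clean)
```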

Here's What the Media Should Really be Focusing on With Blockchain Technology

It's not an exaggeration to say that, from a technological framework perspective, within 30 years blockchain technology is going to form the foundation of practically every global sector.

Bitcoin and the wider blockchain space may be attracting a substantial amount of news media attention. However, most of this attention is rooted in the price of the top three cryptocurrencies, Bitcoin, Ethereum and Ripple, weighted heavily towards the former. This is to be expected, of course, given that price is what draws them into the headlines, but it is not particularly representative of what blockchain is all about, and specifically of the real long-term global impact this space is going to have.

Just as the internet changed the way people, companies and now, with the advent of the Internet of Things, objects interact with one another, blockchain will induce the same degree of change, and more. Of course, when all we see is bitcoin's price soaring, or crashing, or whatever might have happened that day, we lose the long-term perspective and miss out on the slow but steady wave of change coming.

People, be it individuals or groups combining to form global corporations, need access to education outlining clear and simple use cases for blockchain technology. That knowledge helps them recognise precisely how this technology can improve upon the current legacy framework. In other words, there is no point telling a medical doctor that a distributed ledger can underpin micro-transaction-based access to encrypted and unalterable data. Tell the same doctor that they can access a patient's up-to-date medical records in real time from anywhere in the world, and at the same time interact with other physicians or insurance providers to facilitate treatment, and the whole concept becomes a lot more realistic (and, by proxy, worth taking the time to understand).

Let's elucidate through another example. It's been nearly six months since Hurricane Maria devastated Puerto Rico. Unfortunately, more than 400,000 people are still left with no power and, consequently, no reliable access to the internet. With no reliable access to the internet, practically every form of regular communication becomes obsolete, and when communication becomes impossible, or at least impractical, other kinds of essential services break down.

Three Ways Machine Learning Is Improving The Hiring Process

Technology's advance into all industries and jobs tends to send ripples of worry with each evolution. It started with computers and continues with artificial intelligence, machine learning, IoT, big data and automation. There are conflicting views on how new technology will impact the future of jobs. But it's becoming clear that humans will need to work with technology to be successful -- especially as it relates to the hiring process.

There's a great example of this explained by Luke Beseda and Cat Surane, talent partners for Lightspeed Ventures. On a recent Talk Talent To Me podcast episode, they spoke with the talent team at Hired, where I work, about why it's critical to understand why a candidate is pursuing a given job. They concluded that machines can't properly manage the qualitative aspect of hiring. For example, machines can't tell if a candidate is seeking higher compensation or leveraging a job offer to negotiate new terms with their current employer. Humans can. However, machines are better at making processes more efficient. For example, machine learning brings value by processing job applications faster than humans -- which can reduce the amount of time it takes to recruit and hire a new employee.

With that in mind, here are three ways machine learning is improving the hiring process:

Most HR professionals today use recruitment platforms to find potential employees through a search-based system where they can narrow down a list of candidates based on factors like skill, industry, experience and location. But with machine learning capabilities, hiring managers don't have to manually dig through applications from hundreds of candidates to find the best fit. Instead, they can rely on networking and job sites to leverage machine learning and offer intelligent recommendations on the candidates who can fill a given role. This enables a more efficient hiring process for both job seekers and recruiters.

Machine learning can help level the playing field in hiring. It can be employed to provide equal exposure to opportunities, regardless of a candidate's pedigree or background. Algorithms should focus on skill-based data, not on the universities where a candidate has studied, the companies where they have worked, or their ethnicity or gender.
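As a rough illustration of the "skill-based data" point, here is a hypothetical sketch that ranks candidates against a role purely by overlap between required and listed skills. The skill sets, names and scoring rule are invented for illustration; production matching systems are far more sophisticated than this.

```python
# Hypothetical skill-based candidate ranking:
# score = share of required skills the candidate lists, ignoring pedigree entirely.
role_skills = {"python", "sql", "machine learning", "aws"}

candidates = {
    "candidate_a": {"python", "sql", "excel"},
    "candidate_b": {"python", "sql", "machine learning", "aws", "spark"},
    "candidate_c": {"java", "aws"},
}

def skill_match(required: set, offered: set) -> float:
    """Fraction of required skills that the candidate lists."""
    return len(required & offered) / len(required)

ranked = sorted(candidates.items(),
                key=lambda kv: skill_match(role_skills, kv[1]),
                reverse=True)

for name, skills in ranked:
    print(f"{name}: {skill_match(role_skills, skills):.2f}")
```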

3 steps to getting started with supply chain AI

The modern global supply chain is defined by scale — billions of transactions and terabytes of data across multiple systems, with businesses generating more every moment. Traditional supply chain management (SCM) practices are quickly becoming outmatched by the ceaseless onslaught of information. When a problem arises in inventory carrying costs or availability, financial and demand planners dive into Excel or legacy SCM tools in an attempt to pinpoint issues. It's like looking for the proverbial needle in the haystack: the sheer volume, velocity, and variety of data defy human efforts to understand the dynamics and right the ship.

That mismatch is why AI has emerged as a hot topic in supply chain management. Innovative organizations are applying artificial intelligence and machine learning to vast sets of supply chain data to unearth insights into problems and performance that are effectively beyond the reach of even the most skilled planning professionals. AI holds tremendous promise to optimize processes. In fact, Gartner has found that 25 percent of organizations had begun AI initiatives through 2017, up from 10 percent two years earlier. Firms in pharmaceuticals, consumer packaged goods, manufacturing, and other industries are looking to move beyond relatively simplistic SCM tools built on static business rules that inhibit the ability to optimize and scale.

A common question I hear is, "How do we get started?" I'd like to offer three suggestions. For a first project, it's best to identify a specific supply chain issue that could be solved with AI. That helps focus efforts and resources on a single problem, rather than throwing spaghetti at the wall. Naturally, you'll want to select a significant pain point with implications for your supply chain efficiency, customer satisfaction, and bottom line. For instance, say a global CPG company has challenges in meeting service level agreements with its retailer customers. The company can face stiff penalties under its SLAs if stock is not delivered on time and in full. Applying AI to that specific issue puts the CPG company on the fast track to resolving its service level fulfillment issues.

You may have a dozen potential projects for AI across your supply chain, from planning to production, packaging, warehousing, distribution and logistics. Targeting one in particular positions you for the best results, while minimizing the risk that ill-defined experiments end up on the back burner. By selecting a discrete project, you can build on initial successes and learnings to apply AI in other areas. Data is a critical ingredient of AI readiness.
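To make the SLA example more tangible, here is a minimal, hypothetical sketch of the kind of model such a first project might start with: predicting whether an order will be delivered on time and in full from a couple of simple features. The features, data and model choice are assumptions for illustration, not the article's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical orders: [lead_time_days, order_quantity_thousands]
X = np.array([[2, 5], [10, 40], [3, 8], [12, 55], [4, 12], [9, 35], [1, 3], [11, 50]])
# 1 = delivered on time and in full, 0 = SLA missed
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score an upcoming order to flag SLA risk before it ships.
upcoming_order = np.array([[8, 30]])
risk_of_miss = 1 - model.predict_proba(upcoming_order)[0, 1]
print(f"Estimated probability of missing the SLA: {risk_of_miss:.2f}")
```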

Saving lives with big data analytics that predict patient outcomes

Cerner's enterprise data hub allows data to be brought together from an almost unlimited number of sources, and that data can be used to build a far more complete picture of any patient, condition or trend. Insights derived from data can help healthcare providers understand health outcomes not just for individuals but for entire groups of individuals or populations. They can identify and predict high-risk segments within a population and help take preventive action, creating long-term benefits for patients, hospitals, governments and society at large.

To unlock the true potential of data for population health, data from a range of disparate sources, including clinics, hospitals, pharmacies, fitness centres and even homes and places of employment, has to be brought together and analysed. However, traditional healthcare IT solutions tended to be limited in scope and restricted to a particular source of data.

This was the challenge facing Cerner Corporation (Cerner), a leader in the healthcare IT space, whose solutions are used in over 35 countries at more than 27,000 provider facilities such as hospitals, integrated delivery networks, ambulatory offices, and physicians' offices. Cerner was expanding its historical focus on electronic medical records (EMR) to help improve health and care across the board. To do so, it aimed to assimilate and normalise the world's healthcare data in order to reduce the cost and increase the efficiency of delivering healthcare, while improving patient outcomes. Mr David Edwards, Vice President and Fellow at Cerner, explained, "Our vision is to bring all of this information into a common platform and then make sense of it -- and it turns out, this is actually a very challenging problem."

The firm accomplished this by building a comprehensive view of population health on a big data platform powered by a Cloudera enterprise data hub (EDH). Management tooling, scalability, performance, price, security, partner integration, training, and support options were key criteria for the selection of a partner. Today, the EDH contains more than two petabytes (PB) of data in a multi-tenant environment, supporting several hundred clients. It brings together data from an almost unlimited number of sources, and that data can be used to build a far more complete picture of any patient, condition, or trend. The end result is better use of health resources.

The platform ingests multiple different electronic medical records (EMRs), Health Level Seven International (HL7) feeds, Health Information Exchange information, claims data, and custom extracts from a variety of proprietary or client-owned data sources. It uses Apache Kafka, a high-throughput, low-latency open-source software platform, to ingest real-time data streams. The data is then pushed to the appropriate data store: an HDFS (Hadoop Distributed File System) cluster or HBase (a NoSQL database which enables random, real-time read/write access to data). A blog post by Micah Whitacre, a senior software architect on Cerner Corp.'s Big Data Platforms team, explains how Apache Kafka helped Cerner overcome challenges related to scalability for the near real-time streaming system and in streamlining data ingestion from multiple sources, including ones outside Cerner's data centres.
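For readers unfamiliar with how Kafka sits in a pipeline like this, here is a minimal sketch using the kafka-python client: one process publishes HL7-style messages to a topic, another consumes them for downstream storage. The broker address, topic name and message payload are assumptions for illustration; the article describes Cerner's actual implementation only at a high level.

```python
from kafka import KafkaProducer, KafkaConsumer

# Assumed local broker and topic name, purely for illustration.
BROKER = "localhost:9092"
TOPIC = "clinical-feed"

# Producer side: publish an incoming (fake) HL7-style message to the topic.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send(TOPIC, b"MSH|^~\\&|LAB|HOSP|...|ORU^R01|...")
producer.flush()

# Consumer side: read messages and hand them to downstream storage (HDFS/HBase).
consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKER,
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)
for message in consumer:
    # In a real pipeline this is where the record would be written out.
    print(message.value)
```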

How (And Why) To Get The Data That You Need

One of the questions I hear often is, "Where can I get data?" I wish I heard it a lot more often. The question means different things to different people. Some are on a quest for information that will drive business decisions. Others want practice to develop technical skills. Still others are interested in furthering social causes or understanding science. While some need the kind of detailed data that fuels statistical analysis, many are better off if they can find a source that has already done some of the data analysis for them, providing reports, data in aggregate form or even just specific facts. Nearly all can obtain useful data to help meet their goals.

Loads of data is available today, both privately within businesses and through public sources. A little effort can yield a wealth of information. What worries me is knowing that many people who ought to be looking for data aren't. They're making decisions based on just personal opinions, or something in the news, or using some data but neglecting data types or sources that would add value for them. What a waste.

The key to getting the data you need is to have well-defined goals and a clear sense of purpose. The better you can define what information you need and what you're going to do with it, the more easily you will be able to locate appropriate resources. Use these four major types of data sources to guide you to the best data resources you can obtain.

Internal data sources, the information that your organization already has, are always the first resources to consider. Here you can find data that's detailed, uniquely relevant to your organization, and unavailable to your competitors. But getting data from internal sources isn't always easy. You'll have to figure out which functional areas collect and maintain the data, how to get access and what uses are permitted. That's where your groundwork, defining exactly what you're seeking and why, becomes very important. You may need to take additional steps, from making formal requests to obtaining permission from management, and your success will depend on having specific goals and a clear business case.

An In Depth Look Into Blockchain Technology

Blockchain, the brainchild of the mysterious, pseudonymous Satoshi Nakamoto, is an indisputably ingenious innovation. The technology allows digital information to be distributed to users without being copied, thus creating the spine of a new kind of internet. Blockchain technology was originally invented for the sole purpose of governing the digital currency Bitcoin; however, tech pioneers have devised, and continue to devise, other potential uses for what is arguably the most disruptive technology of our time. Bitcoin (BTC), tagged, quite appropriately, "digital gold", currently has a total currency value of close to $9 billion (USD), with blockchain technology being central to its development as well as that of other emerging altcoins.

Similar to the internet, understanding how the blockchain works is not a strict requirement for using the technology. However, having a basic knowledge of blockchain technology will help you grasp why it is revolutionary. Essentially, a blockchain works like a spreadsheet that is duplicated thousands of times across a network of nodes and regularly updated with trade transactions. This is essentially how the blockchain functions: information on a blockchain exists as a shared database that is continually updated and reconciled.

One of the amazing benefits of blockchain technology is that this database of information and trade transactions is not stored in any single location and is not governed by one central node or computer. Records kept on a blockchain are public and easily verifiable by anybody. The fact that the blockchain is not governed by any central node, but instead is hosted by millions of nodes/computers concurrently over the internet, means it cannot be hacked. Blockchain technology, similar to the internet, has a built-in robustness. The blockchain stores blocks of records that are exactly the same across its network, and as such this stored information cannot be controlled by any single entity and has no single point of failure.

Since the invention of the first blockchain-based digital currency (Bitcoin) back in 2008, there has been no significant disintegration of the Bitcoin blockchain; the problems recorded with Bitcoin to date have been the result of hacking or mismanagement. These issues are not attributable to the underlying concept of the Bitcoin blockchain, but to bad intentions (hacks) and human error. As with the durability of the internet (which has been around for about thirty years), blockchain technology is revolutionary and is set to stay as it gets developed further.

The blockchain network functions in a state of consensus, automatically checking in with itself on a ten-minute loop. Essentially, it functions as a self-auditing ecosystem of digital value, reconciling every transaction that occurs in ten-minute intervals. Each group of these reconciled transactions is called a "block". Two important properties result from this automatic reconciliation. While there is a theoretical possibility of the network being overridden, it is not practically feasible.
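To illustrate the "blocks of records reconciled into a chain" idea, here is a minimal, self-contained Python sketch of a hash-chained ledger. It is a teaching toy, not how Bitcoin is implemented: there is no network, consensus or proof-of-work here, just the core trick of each block committing to the hash of the previous one.

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Create a block that commits to its contents and to the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    block_bytes = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(block_bytes).hexdigest()
    return block

# Build a tiny chain: a genesis block plus two blocks of (fake) transactions.
chain = [make_block([], previous_hash="0" * 64)]
chain.append(make_block(["alice->bob: 5"], previous_hash=chain[-1]["hash"]))
chain.append(make_block(["bob->carol: 2"], previous_hash=chain[-1]["hash"]))

def is_valid(chain):
    """Verify that every block still links to the recomputed hash of its predecessor."""
    for prev, curr in zip(chain, chain[1:]):
        body = {k: prev[k] for k in ("timestamp", "transactions", "previous_hash")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()
        if curr["previous_hash"] != recomputed or prev["hash"] != recomputed:
            return False
    return True

print(is_valid(chain))                           # True
chain[1]["transactions"] = ["alice->bob: 500"]   # tamper with history
print(is_valid(chain))                           # False: the chain no longer reconciles
```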

Drink It Up: Coca-Cola Is Using Blockchain to Improve Workers' Rights

Even with the latest hiccup in cryptocurrency valuations, I don't think there's been a faster-appreciating asset on the planet. Since the beginning of 2017, the aggregate cryptocurrency market cap has vaulted from less than $18 billion to more than $300 billion, which is one heck of a return in less than 15 months. At the heart of this rally is the emergence of blockchain technology.

For those unfamiliar with blockchain, it refers to the digital, distributed, and decentralized ledger often underlying digital currencies that's responsible for logging all transactions without the need for a financial intermediary (i.e., a bank). Blockchain itself was brought into the spotlight in 2009 when bitcoin debuted. Its evolution is expected to be a game changer for the financial services industry, which has a handful of perceived flaws, including long validation and settlement times for cross-border remittances, and higher transaction fees as a result of banks acting as third parties during transactions.

Blockchain aims to correct these issues in three ways. First, decentralization -- storing data on servers and hard drives all over the world, rather than in one location -- ensures that no single entity, including hackers and businesses, can gain control of a network. Second, it simplifies the transaction to just a sender and receiver of funds. By taking banks out of the loop, it should lower transaction costs. Finally, validation and settlement should occur a whole lot faster, especially for cross-border payments. Whereas transactions under the current system could take up to five business days to settle, they could be virtually instant with blockchain.

Yet what's often overlooked is what blockchain can do in a noncurrency setting. Blockchain has the potential to reshape how supply chains are managed and monitored. It could be a breakthrough for retailers looking to reward customers with loyalty points. It may even be the cornerstone for decentralized IDs. Now, blockchain will be at the forefront of protecting the rights of workers who might otherwise be unable to protect themselves.

As announced on March 16, beverage giant Coca-Cola (NYSE:KO), which operates in all but one country worldwide (North Korea), is partnering with the U.S. State Department, Bitfury Group, Emercoin, and Blockchain Trust Accelerator to create a decentralized blockchain-based registry for workers in foreign countries, to ensure that employers honor the scope of work contracts. According to the International Labor Organization, nearly 25 million people worldwide, almost half of them in the Asia-Pacific region, work in forced-labor conditions. Among the industries most scrutinized for their labor conditions is food and beverage. Knowing this, Coca-Cola agreed to conduct 28 country-level studies on child labor, forced labor, and land rights for its sugar supply chains by 2020, per Reuters. Here's how everything looks to shake out: Coca-Cola will provide the data in more than two dozen countries via its labor force. Meanwhile, the U.S.

Cloud + Streaming Analytics + Data Science = Five Big Data Trends Now

How streaming analytics, the rise of data science and the growth of cloud could change the digital transformation path for the enterprise in the coming months. This year will be the year when real-time big data analytics comes to the forefront of the enterprise. Although it has been said before, this year will see a convergence of several factors that will make this prediction a reality. Companies are increasingly using cloud platforms and advanced data processing solutions to derive essential business insights to enhance operational processes, improve customer service and provide executives with critical data points. This market shift is driving the push to create more value from big data and investments in real-time analytics, and growing the need to use data science and machine learning for greater insight. In the coming months, these five factors will enable enterprises to unlock the advanced power of their data:

Enterprises are steadily moving their on-premise IT and data processing to the public cloud. This trend is expected to accelerate through this year, driven by the growing availability of pre-built, reliable, scalable platform-as-a-service (PaaS) offerings for every possible application development and deployment need across the organization. Developers and everyday business users will use these cloud application platforms to design and operate applications more easily and faster, with minimal coding, while focusing on the core business logic. Additionally, the main concerns related to security are diminishing as the public cloud becomes more robust and secure. This is validated by the growing use of public and private cloud by traditionally cloud-shy, conservative businesses like large financial services companies and banks, even for critical business processes. The total cost, complexity, and burden of trying to manage, scale and run large application devops on private infrastructure will only make public cloud services more and more attractive to enterprises.

Real-time analytics and stream processing will truly arrive in 2018. Owing to a large number of successful early adopters, proof-of-value and proof-of-concept projects, enterprises will begin large-scale implementation of stream processing and advanced real-time analytics as part of their core data processing infrastructure. This will be driven by key business objectives including competitive pressure, a growing need for fast data processing, the ability to act on business opportunities in real time, and the demand for contextual and time-relevant customer experiences. To meet this demand, vendors will start offering vertical end-user applications like pre-built churn analytics, anomaly detection, predictive maintenance, recommendation engines and customer-360 frameworks on big data platforms.

There will be a shift to deriving higher value from data lake investments. Transactional platforms will connect in real time to big data lakes to enable faster, more intelligent processing of data as it arrives. Direct business intelligence (BI) solutions running on top of the data lake, providing scalable, fast interactive responses to queries spanning very large data sets, will reach critical mass adoption.
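For a concrete sense of what "stream processing" means in practice, here is a minimal, framework-free Python sketch of one of its basic building blocks: a sliding time window over an event stream, of the kind used for real-time anomaly detection or customer-360 counters. Real deployments would use engines such as Kafka Streams, Flink or Spark Streaming; the event shape and threshold here are invented for illustration.

```python
from collections import deque

WINDOW_SECONDS = 60
window = deque()   # (timestamp, amount) events inside the current window

# Simulated stream of (timestamp_seconds, transaction_amount) events.
stream = [(0, 20.0), (5, 22.0), (30, 21.0), (65, 400.0), (70, 23.0)]

for ts, amount in stream:
    # Evict events that have slid out of the 60-second window.
    while window and ts - window[0][0] > WINDOW_SECONDS:
        window.popleft()

    # Use the window's average as a baseline before admitting the new event.
    baseline = sum(a for _, a in window) / len(window) if window else amount
    flag = "  <-- anomaly?" if amount > 5 * baseline else ""
    print(f"t={ts:>3}s amount={amount:>6.1f} baseline={baseline:>6.1f}{flag}")

    window.append((ts, amount))
```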

The reasons U.S. blockchain adoption has stalled

Enthusiasm for blockchain technology in the financial services industry seems to be ebbing. JPMorgan Chase, which developed its own open-source distributed ledger, Quorum, was rumored on Thursday to be spinning off the Quorum unit into a separate company. Unnamed sources told the Financial Times that "some rival banks may have been reluctant to use Quorum because it was so closely associated with JPMorgan, leading the U.S. bank to conclude that its chances of becoming the industry standard were greater as a standalone entity." JPMorgan Chase declined requests for an interview. A spokeswoman did not deny the report but defended Quorum and reiterated JPMorgan's commitment to the underlying technology. "We continue to believe distributed ledger technology will play a transformative role in business, which is why we are actively building multiple blockchain solutions," she said. "Quorum has become an extremely successful enterprise platform even beyond financial services, and we're excited about its potential."

The bank shared a list of projects in the works based on the Quorum technology, including the Interbank Information Network it announced in October. This is an initiative in which it's working with Royal Bank of Canada and Australia and New Zealand Banking Group to use blockchain technology to handle global payments. In another example, ING is working with the global merchant Louis Dreyfus Co., ABN Amro and Societe Generale to create a Quorum-based blockchain for agricultural commodities. The companies say they have already handled a shipment of soybeans from Louis Dreyfus to the Chinese buyer Shandong Bohi with no paper contracts, certificates or manual checks — at five times the speed of a paper-based trade.

There are a few other scattered examples of blockchain projects in the financial services industry. Northern Trust a year ago developed a distributed ledger based on the Linux Foundation's Hyperledger Fabric that handles private-equity deals in Guernsey, one of the Channel Islands. So far, one Swiss client is using it. The bank says this is on purpose, as it is still building out additional features for the platform. The Depository Trust & Clearing Corp. is working to put its credit default swap warehouse on a distributed ledger based on Hyperledger Fabric starting in the first quarter of 2019. "If anyone is going to disrupt DTCC in the future, it's going to be us," Michael Bodson, CEO of the DTCC, said at the group's Fintech Symposium on Thursday. "That's why we've taken the lead in advancing the use of distributed ledger technology." The group is also working with Digital Asset Holdings and R3 to figure out where their technologies might make sense for the various things the DTCC does.

Otherwise, blockchain activity in the banking industry is hope, talk and proofs of concept, but little in live production mode. When asked about their top concerns about blockchain technology, bankers and capital markets executives in attendance at the DTCC's symposium cited lack of an obvious business purpose, uncertainty about how much the technology will cost, and interoperability issues as among the biggest stumbling blocks.

Lack of a clear business case. "I think [return on investment] kills a lot of innovation projects right out of the gate at a big bank," said Grainne McNamara, principal at PwC. "People are looking for ROI before you even have a sense of what you even have. How can you calculate ROI when you don't really know what you have?" However, banks' discretionary budgets are beginning to open up again a little, she said.

5 Things to Think About When Considering Monetizing Data

Trey Stephens is the Director of Audience Monetization at Acxiom, where he specializes in connecting offline with online data and audiences, marketing technology, and creating and growing strategic partnerships across industry verticals. Previously, he held several roles at Walmart, including Senior Manager, Strategic Partnerships for Global Customer Insights and Analytics, in addition to Senior Finance Manager for the company's Information Systems Division.

Harnessing the power of data-driven insights continues to top the wish list for many executives seeking to build a competitive advantage. Does your brand have the mindset, resources and strategy to succeed? Here's what you need to know, according to Trey Stephens, Director, Audience Monetization, Acxiom.

Marketers aren't alone in recognizing the value of data. If, as has been widely asserted, data has taken the place of oil as the most significant untapped global asset, then it's no wonder brands today are increasingly seeking ways to monetize this ever-expanding asset. The objective may be clear, but the path to get there is less obvious. For those thinking about data monetization, there are five critical factors to consider when determining whether your business is ready.

1. Does your company value data as an asset on the balance sheet? While current accounting practice doesn't recognize data as an asset, in recent years some companies have begun using asset valuation methodologies to apply a value to the information they collect and manage. Doing so can help brands set expectations, assess governance approaches and determine the strategic importance of this data. Appropriately prioritizing management decisions around the capture, use and applications of various data segments has led to fundamental changes within these emerging data-centric organizations. In fact, how data is managed and applied to business objectives at a tactical level increasingly differentiates how an organization performs at a strategic level. Today, those in the C-suite who "get it" increasingly speak to the value of customer data, understand the benefit of omnichannel identity resolution to enhance customer engagement, and allocate additional resources and budget to capture data from new sources. These executives are likely asking themselves and their colleagues a number of tough questions. If not, data monetization can accelerate these capabilities when applied thoughtfully and in a privacy-compliant manner.

2. How much does your company spend annually to manage and capture data? If your organization has identified data monetization as a potential new revenue source, then it should determine the current annual investment in capturing and managing existing data. In partnership with the financial arm of your organization, there should be an annual review of data sources.

Is it smart to have artificial intelligence?

This is unsettling. The help is getting surly. We were in Brooklyn heading to a favorite Mexican dive when a pal, Demetri, asked about a movie we'd seen recently. My wife, Wink, happened to be using the voice recognition system on her phone at the moment, and suddenly came a stern rebuke along these lines: "There's no need to talk to me like that." Demetri, startled, mumbled a good-natured apology, but the voice evidently was too miffed to reply. Why an inquiry about "Three Billboards Outside Ebbing, Missouri" prompted disapproval, we'll never know. A modern mystery, one of many, and another reminder of how easy it is to get in trouble.

A short while later, news stories appeared regarding Alexa, the voice inside Amazon's Echo personal assistant. Users claimed Alexa had been laughing at them for no reason. "It was really creepy," tweeted one. "We unplugged her," reported another. The tech geniuses at Amazon said a fix was in the works and that, most likely, Alexa had simply misinterpreted remarks uttered in her presence and decided to lighten the mood. Nothing personal, you see. Consumers of modern media, beware. Equipping your home with what amounts to an eavesdropping device invites mischief. When we had a dog, I felt I had to be careful what I said around him. Who knows what an animal understands? I'd be whispering nonstop if Alexa were on the end table.

Where are we with Artificial Intelligence, anyway? A recent poll by Northeastern University and the Gallup organization found that 85 percent of Americans use one AI application or another — navigation, streaming, personal assistance, "smart" home devices like "self-learning" thermostats, that sort of thing. Of those surveyed, 79 percent said AI had a "very or mostly positive impact on their lives so far," pollsters reported. It's the "so far" that interests me. Oh, I know how familiar this sounds. My parents were suspicious of television at first. We had a friend who lamented the arrival of pocket calculators for fear children would stop learning how to add.

The 4 Laws of Digital Transformation

My discussions with organizations looking to "digitally transform" themselves are yielding some interesting observations. I expect that when these discussions move into the execution phase, we will start to create some "Laws of Digital Transformation" that will guide organizations' digital transformation journeys. So with that in mind, let me start by proposing these "4 Laws of Digital Transformation".

Digital Transformation is about innovating business models, not just optimizing business processes. Organizations are looking to leverage these digital assets to create new "economic moats." Warren Buffett, the investor extraordinaire, popularized the term "economic moat," which refers to a business's ability to maintain competitive advantages over its competitors (through process and technology innovation and patents) in order to protect its long-term profits and market share from competing firms. As highlighted in the McKinsey Quarterly article titled "Competing in a World of Sectors Without Borders," organizations are embracing digital transformation to knock down traditional industry boundaries and disrupt conventional business models (see Figure 1). While organizations that are looking to "digitally transform" themselves need to look long term, they can, and should, apply their digital assets to optimizing today's key business and operational processes with machine learning and artificial intelligence capabilities.

Digital Transformation is about coupling digital technologies with digital assets in order to eliminate time and distance barriers in your business model. Let's say that you are in the retail industry and looking to identify opportunities to combine digital technologies with digital assets to eliminate time and distance as barriers to your business model. The scenario outlined in Table 1 provides an example of that process. The scenario in Table 1 isn't just optimizing the ordering process; it requires the complete re-wiring of the organization's business model and value creation process: from demand planning to procurement to quality control to logistics to inventory management to distribution to marketing to store operations to customer experience. There are a multitude of opportunities for organizations to couple digital technologies with digital assets to remove time and distance barriers across the organization's value creation model. Let's go old school ("there's no school like old school") and check out the blog "Michael Porter's Value Chain Creation Model" to start that brainstorming process (see Figure 2).

Digital Transformation is about creating new digital assets (data, analytics, and insights about customers, products, operations and markets). Organizations need to create new digital assets around customer, product and operational insights. Organizations need to capture the analytical and behavioral insights about their customers, products and operations, including tendencies, inclinations, predispositions, propensities, biases, preferences, trends, performance and usage patterns, associations and affiliations. But these new digital assets can't be built all at once. Organizations need a thoughtful process for building out these analytic and behavioral insights one use case at a time.

Machine Learning and Its Algorithms to Know

Algorithms in Machine Learning (MLAlgos)

Machine Learning at a Glance. Machine learning is a subset of artificial intelligence which borrows principles from computer science. It is not AI itself, though; it is the focal point where business experience meets emerging technology and the two decide to work together. ML also has a very close relationship to statistics, a branch of mathematics. It instructs an algorithm to learn for itself by analyzing data: the more data it processes, the smarter the algorithm gets. Although the foundation was laid down in 1950, until only recently ML remained largely confined to academia. Organizations have had success with each type of learning, but making the right choice for your business problem requires an understanding of which conditions are best suited to each approach. Knowing the types of machine learning algorithms (MLAlgos), and when each should be used, is extremely important. Understanding the goal of the task, and all the things being done in the field, puts you in a better position to break down a real problem and design a machine learning system.

Types of Machine Learning. Before we get into MLAlgos, let's understand some basics. The approach to developing ML involves learning from data inputs based on "what has happened"; evaluating and optimising different model results remains the focus here. Today, machine learning is widely used in data analytics as a method to develop algorithms for making predictions on data. It is related to probability, statistics, and linear algebra. Machine learning is classified into four categories at a high level, depending on the nature of the learning and the learning system (somehow I find it difficult to accept semi-supervised learning as its own category).

3 Major + 1 Non-Major Types
Supervised learning: the machine gets labelled inputs and their desired outputs. The goal is to learn a general rule that maps inputs to outputs.
Unsupervised learning: the machine gets inputs without desired outputs; the goal is to find structure in the inputs.
Reinforcement learning: the algorithm interacts with a dynamic environment and must achieve a certain goal without a guide or teacher.
Semi-supervised learning: semi-supervised algorithms are the best candidates for model building when labels are missing for some of the data. So if the data is a mix of labelled and unlabelled examples, this can be the answer. Typically a small amount of labelled data is used with a large amount of unlabelled data.

Some of the popular Machine Learning Algorithms (MLAlgos)
Linear Regression: simple linear regression has only one independent variable; multiple linear regression defines a relationship between several independent variables and a dependent variable.
Logistic Regression: a simple form of regression analysis in which the outcome variable is binary or dichotomous. It helps to estimate adjusted prevalence rates, adjusted for potential confounders (sociodemographic or clinical characteristics).
Linear Discriminant Analysis: a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events.
Classification and Regression Trees: decision trees are an important type of algorithm for predictive modeling in machine learning; a greedy algorithm based on the divide-and-conquer rule.
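As a quick illustration of two of the algorithms listed above, here is a minimal scikit-learn sketch that fits logistic regression and a decision tree classifier on a synthetic dataset and compares their accuracy. The dataset and parameters are arbitrary; the point is only to show the supervised-learning workflow of fit, predict, evaluate.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary classification problem: 500 samples, 8 features.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)                       # supervised: learn from labelled data
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.2f}")
```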

Blockchain is on a collision course with EU privacy law

Those who have heard of "blockchain" technology generally know it as the underpinning of the Bitcoin virtual currency, but there are myriad organizations planning different kinds of applications for it: executing contracts, modernizing land registries, even providing new systems for identity management. There's one huge problem on the horizon, though: European privacy law.

The bloc's General Data Protection Regulation, which will come into effect in a few months' time, says people must be able to demand that their personal data is rectified or deleted under many circumstances. A blockchain is essentially a growing, shared record of past activity that's distributed across many computers, and the whole point is that this chain of transactions (or other fragments of information) is in practice unchangeable; this is what ensures the reliability of the information stored in the blockchain. For blockchain projects that involve the storage of personal data, these two facts do not mix well. And with sanctions for flouting the GDPR including fines of up to €20 million or 4 percent of global revenues, many businesses may find the ultra-buzzy blockchain trend a lot less palatable than they first thought.

"[The GDPR] is agnostic about which specific technology is used for the processing, but it introduces a mandatory obligation for data controllers to apply the principle of 'data protection by design'," said Jan Philipp Albrecht, the member of the European Parliament who shepherded the GDPR through the legislative process. "This means for example that the data subject's rights can be easily exercised, including the right to deletion of data when it is no longer needed. This is where blockchain applications will run into problems and will probably not be GDPR compliant." Altering data "just doesn't work on a blockchain," said John Mathews, the chief finance officer for Bitnation, a project that aims to provide blockchain-based identity and governance services, as well as document storage. "Blockchains are by their nature immutable. The GDPR says you must be able to remove some data, so those two things don't square off."

There are two main types of blockchain: private or "permissioned" blockchains that are under the control of a limited group (such as the Ripple blockchain, designed to ease payments between financial services providers), and public or "permissionless" blockchains that aren't really under anyone's control (such as the Bitcoin or Ethereum networks). It is technically possible to rewrite the data held on a blockchain, but only if most nodes on the network agree to create a new "fork" (version) of the blockchain that includes the changes, and to then continue using that version rather than the original. That's relatively easy on a private blockchain, if not ideal, but on a public blockchain it's a seismic and exceedingly rare event. At least as the technology is currently designed, there is little to no scope for fixing or removing bits of information here and there on an ongoing basis.

"From a blockchain point of view, the GDPR is already out of date," Mathews said. "Regulation plays catch-up with technology. The GDPR was written on the assumption that you have centralized services controlling access rights to the user's data, which is the opposite of what a permissionless blockchain does." Jutta Steiner is the founder of Parity.io, a startup that develops decentralized technologies, and the former security chief for the Ethereum Foundation. She agrees with Mathews that "the GDPR needs a proper review." "From a practitioner's perspective, it sounds to me that it was drafted by trying to implement a certain perspective of how the world should be without taking into account how technology actually works," Steiner said. "The way [public decentralized network] architecture works, means there is no such thing as the deletion of personal data. The issue with information is once it's out, it's out."
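Here is a small, self-contained sketch of why "rewriting" a chain is such a heavy operation: because each block commits to the hash of its predecessor, honoring an erasure request for data in an early block means recomputing every later block, producing a new fork that all participants would then have to adopt. This is a toy model for illustration only, with no network or consensus layer.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (everything except its own hash field)."""
    body = {k: block[k] for k in ("data", "previous_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()

def append(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "previous_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

original = []
for record in ["genesis", "alice: passport #123", "bob: invoice 42"]:
    append(original, record)

# Honouring an erasure request for Alice's record means rebuilding every
# block from that point on: the result is a new fork of the chain.
fork = []
for block in original:
    append(fork, "[erased]" if "alice" in block["data"] else block["data"])

print([b["hash"][:8] for b in original])
print([b["hash"][:8] for b in fork])   # identical up to the change, different afterwards
```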

AI-driven data could be the music industry’s best marketing instrument

The music industry is learning a new rhythm through the instrument of artificial intelligence. AI is revolutionizing insights and business strategies and fine-tuning the way we work, connect, learn, and play around the world. Expected to become a $70 billion market by 2020, AI is shifting traditional practices to more sustainable digital spheres. In the music industry, emerging AI tools are helping reorchestrate the way audiences consume music content. One of the most effective marketing tools industry pros can utilize is the consumer data mined through AI's machine learning. In the future, AI-driven data can help the music industry fine-tune its marketing strategies, offering improved insights to maintain harmony between artists, the industry, and fans — all while maximizing profits.

AI is no stranger to the music industry. Since their apps launched, audio and music-facing tech companies like Shazam and SoundHound have utilized AI technologies that analyze a large catalog of songs using spectrograms to measure the various frequencies. But the access to AI-enabled data is starting to shift the music industry into more sophisticated arenas. Major recording companies like Sony Music and Universal Music Group own most of the content, along with shares of consumer platforms such as streaming services and apps. While major recording companies are granted access to consumer data, it's the streaming services, such as Spotify and YouTube, that control how people consume music and, thus, who has access to AI-driven data. Independent artists own a small portion of all the music content available, but they gain data from direct-to-fan platforms like Hive or Pledge Music. Yet many recording industry professionals are just learning how to access and analyze emerging data tools to help maximize their profits. Here are four machine learning metrics that music industry professionals should use.

Engagement data offers insight into how audiences respond to new music genres, trends, artists, and songs. It can show the number of collections, changes in followers, and the number of plays per payer, all calibrated by the number of saves or collections that include a specific song. Professionals from across the music industry can use this actionable engagement data to attract increased visibility for their signed artists, thereby reaching more fans. Music labels can target audiences and track patterns to make improved business decisions, all while stimulating revenue. By 2030, Goldman Sachs reports, streaming services will create $34 billion in revenue for the music business. These services will simultaneously generate a consistent and credible source of data that improves insight and outreach to various audience demographics.

Each niche of the music industry has a specific need for data. Streaming services like Spotify use filtered data to transition non-paying listeners into paying subscribers. A major label, on the other hand, operates differently. A label's goal is to create filtered data that can help them market songs and turn mediocre fans into dedicated superfans. Spotify tapped into this data by creating Found Them First, a microsite that allows users to see which musicians they listened to on Spotify before they became popular. For labels, this monetizes the idea of early fandom. Ultimately, these insights are used to motivate subscriber growth, driving fans' desire to explore artists earlier in their careers.
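The Shazam and SoundHound reference above is about extracting features from spectrograms. Here is a minimal sketch, using NumPy and SciPy, of computing a spectrogram for a synthetic tone and picking out its dominant frequency; it is an assumption-laden toy, not either company's actual fingerprinting pipeline.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic one-second signal: a 440 Hz tone plus a little noise, sampled at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(fs)

# Spectrogram: frequency content of the signal over short, overlapping time windows.
freqs, times, power = spectrogram(signal, fs=fs, nperseg=512)

# The dominant frequency per window is the simplest possible "fingerprint" feature.
dominant = freqs[power.argmax(axis=0)]
print(f"median dominant frequency: {np.median(dominant):.0f} Hz")   # close to 440 Hz
```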

The ethics of AI in the shadow of GDPR

Days later, my ears are still ringing from the booming baritone of the public-address announcer in the keynote session on the first morning of the IBM Think 2018 conference in Las Vegas. I keep wondering when the humming will subside, but with it a single thought continues to linger from the presentation delivered by IBM's CEO, Ginni Rometty: the ethics of artificial intelligence, or AI.

Now, this is not AI in the sense of the cute emoji that you may have made of your face on your mobile device. No, this is AI that is managing the ebb and flow of the global supply chain, or managing the routing of bags for your rebooked flights, to give two examples. Not some garden-variety cruft. Artificial intelligence, or AI, is a system that demonstrates traits that can mirror human intelligence in some form or another. These traits can be associated with problem solving, manipulation of informational inputs, or, in some cases, even mimicking creativity. Obviously, your mileage may vary.

AI is a type of intelligence that often evokes the specter of the movie Terminator. While this draconian, post-apocalyptic vision of the future is what most people think of when the subject of AI comes up, it really does us a disservice. Case in point: Watson from IBM. This is an AI platform that can be trained to, as an example, ingest a request for proposal (RFP) document and respond. Rather cool when you think of it. Now, in my days as a defender, I would have loved to have something like Watson as a SaaS offering that could do the inverse. By which I mean, it would read an RFP response from a vendor and, every time the wrong company was referred to in the document, send 30 pizzas with anchovies to the house of that vendor's CEO. While I'm being facetious, it was really amazing to see how often that would happen in my past roles. RFPs are just one example.

With all of that data processing there inevitably comes the question of how the data is going to be handled and secured. Data stewardship and accountability for data are of paramount importance for doing business today. Unfortunately, for many enterprises that have built up over the years, this has not always been the case.

Internet of things is re-defining the way business is done: Here’s why it’s time for total transformation

The internet has undeniably impacted the lives of nearly everyone across the globe; be it a modern-day millennial or a baby boomer, everyone today feels the need to be connected and part of a network in this fast-paced world. It's difficult to imagine an individual not subscribing to at least one social networking site. Connectivity is all-pervasive, and it is all about collaboration and speed, even as we strive to deploy those values in our lives and organisations. A connected society encompasses the concepts of connected homes, buildings, workplaces and, most importantly, industries, which drive all major developments.

The concept of a connected industry, or the Industrial Internet of Things (IIoT), is inherent to companies building new products and applications that leverage the internet, using cutting-edge technology to develop simple solutions for complex requirements. For instance, a cloud-based robot today smoothly monitors and analyses thousands of phone calls at data centers, and generates a qualitative report at the end of the day without any human intervention. The role of IIoT is becoming critical in industrial operations, where efforts are being made to increase productivity without compromising efficiency, at an optimal cost.

IIoT is no longer an up-and-coming trend: it is already here, as we see a lot of manufacturers adopting IIoT-enabled technologies. A recent survey, Data's Big Impact on Manufacturing: A Study of Executive Opinions, jointly conducted by Honeywell and KRC Research, revealed that 70% of the 200 manufacturing executives surveyed plan to invest significant resources in data analytics technology in the near future. This desire is driven by the strategic and financial value of their challenges: downtime and related losses in efficiency, inadequate staffing, off-spec production and supply chain inefficiencies. It has already been proven that IIoT has the capability to resolve these challenges.

IIoT, sometimes used interchangeably with other terms such as smart manufacturing, Industry 4.0 and digitisation, has taken center stage in thought-provoking, future-minded conversations across the globe, and across industries. In another survey, conducted by the Boston Consulting Group (BCG), business leaders across industry sectors acknowledged that spending on Internet of Things (IoT) technologies, apps and solutions is increasing rapidly and will reach approximately $267 billion by 2020, with 50% of IoT spending driven by discrete manufacturing, transportation and logistics, as well as utilities. The immediate short-term effect for companies adopting IIoT-based technologies and solutions is operational consistency and data monetisation. The long-term impact is utilising technological advancements to ensure an outcome-based, result-oriented economy. Another interesting development is cloud-enabled hosting: hosting applications at the enterprise level is part of the new industrial world order.
more...
No comment yet.
Scooped by Yves Mulkers
Scoop.it!

Do we take data visualisation too seriously?

Do we take data visualisation too seriously? | 7wData | Scoop.it
It's been a while since my last post – there's a good reason for this. Well, a reason anyway. I have a 90% written post which I have been mulling over for a long time. Because I haven't felt comfortable with it, or finished it, it's been a bit of a logjam for the blog (a blogjam?), with subsequent blog posts and ideas taking a back seat. So, rather than abandon it, I'm going to smuggle that post, untitled, into the next two paragraphs, then move on with this post. Let's make this blog more about data visualisation, and the questions therein, than about me. The eventual point of this post might not be clear until much nearer the end – stick with me!

The title of the aforementioned previous post was going to be "So how on earth did I become a Tableau Zen Master?", because last month, that happened. You can learn more about the Tableau Zen Master scheme here and the three elements that comprise selection: Master, Collaborator, Teacher. I've done a lot of thinking about it – it really has left me scratching my head for a few weeks despite all the congratulations and positive comments I've had, and I'm still not sure I know the answer to the question of how I became one. But I've started to get through the impostor syndrome and I'm going to embrace it. Though I can't mention impostor syndrome without linking to fellow Zen Bridget Cogley's post about it – just promise me if you read it you'll come back here. Get engrossed in her blog and you won't feel the need for mine or any others …

Anyway, there are just 30 Zens in the world, and I know that I am not one of the top 30 most technical users (I don't know how you'd quantify it, but I'm patently not even anywhere near the top several thousand). But I've come to realise it's about more than that. I have to assume that people respect and like what I do and the manner in which I do it, share it and evangelise it. And more often these days I'm seeing amazing work which adapts and improves on mine, with three crucial words that I've grown to love: "inspired by @theneilrichards". Hopefully I can embrace it with humility, because even by cutting a long introspective blog post down to two paragraphs I can still leave enough space to say a huge thank you to those who nominated me and those in Tableau who appointed me a Zen Master. Those who have given advice have all essentially said the same thing, to me and to all new Zen Masters: "Keep doing what you're doing". So, I will. For me, that's visualising a lot, speaking a lot, and blogging a lot.

So, perhaps as a result, I've been prolifically visualising. Two visualisations followed around last month's Winter Olympics. First of all, my response to a #MakeoverMonday challenge to visualise historic medallist data was the joy plot below (and here's more about joy plots). Hot on the heels of that was my curling-themed visualisation, which I'm obliged to feature and self-promote because it was chosen as Tableau's Viz of the Day. Both of these are examples of the direction in which I'm continuing to head (especially in the visualisations I choose to promote): design-driven data. Focussing on design elements first, and choosing the data, whether it be the overall dataset or the elements within that dataset, to fit the visualisation choice. The joy plot in particular was a case in point. Partly liked and partly disliked/overlooked, I had one comment which described it as a beautiful work of data art. That continues to be my aim, and because I don't claim it to be an analytical piece, that's the feedback I can choose to include, while ignoring those who dislike its analytical weaknesses! The curling visualisation has received a bit more acclaim and is another example of design-driven data that I'm really pleased with.
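For readers who haven't met the chart type, here is a minimal sketch of a joy plot (ridgeline plot) in Python with matplotlib. The data is randomly generated and has nothing to do with the Olympic medallist dataset mentioned above; it only illustrates the stacked-density layout the chart is known for, and the year labels are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
years = range(1994, 2018, 4)            # placeholder labels, one ridge per "year"

fig, ax = plt.subplots(figsize=(6, 4))
for i, year in enumerate(years):
    # Fake distribution for this ridge; a real joy plot would use one series per group
    values = rng.normal(loc=i * 0.3, scale=1.0, size=500)
    density, edges = np.histogram(values, bins=60, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    # Stack each density curve above the previous baseline so the ridges overlap
    ax.fill_between(centers, i, i + density * 2, color="steelblue",
                    alpha=0.8, zorder=len(years) - i)

ax.set_yticks(range(len(years)))
ax.set_yticklabels(years)
ax.set_xlabel("value")
ax.set_title("Ridgeline ('joy') plot sketch")
plt.tight_layout()
plt.show()
```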
more...
No comment yet.
Scooped by Yves Mulkers
Scoop.it!

Digital Identity is the key to the Blockchain Economy

Digital Identity is the key to the Blockchain Economy | 7wData | Scoop.it
This is Part 1/chapter 12 in The Blockchain Economy serialised book. For the index please go here. On Thursday I gave a talk about Digital Cooperatives at an event near Geneva in France that was organised by ARK IO (a French crypto platform that we had earlier covered here; I was impressed by the developer enthusiasm for the platform and plan to research it more). This event focussed on two big trends that I have covered in earlier chapters/posts. The event came after the news that France was making some serious moves to become a major jurisdictional venue for cryptocurrency investors (see news here). I thought this was the story, but really France is just one more jurisdiction positioning itself in a space that seems to gain a new entrant daily. The surprise takeaway from the event for me was how Digital Identity is the key to the Blockchain Economy. This chapter/post focusses on:

- The Identity Middleware in the Blockchain platform stack
- The Digital Identity edge cases that may drive change
- Investor Identity replacing the broker rolodex in the ICO market
- Why our Identity is more important than our financial assets, yet we give it all away

Digital Identity on Blockchain is driven by three concepts:

1. Your Digital Identity is an asset that you control through a private key. You control Digital Identity on Blockchain just like you control cryptocurrencies on Blockchain.

2. Nobody can change that data. Not you, not your government, not some corporation. Data can be appended, but never deleted or changed. Think of this like the private key to your bitcoin, which allows you to view your crypto stash but not to magically make the stash bigger. Your biggest asset is safely under your control, but you cannot simply write your own history of yourself.

3. Granular control. You can reveal one part to one company, and only that one part. The only person who sees the whole picture is you. For example, only you can see your health records, friends, financial records, political opinions and all the intelligence you gather by combining those data sets.

The Identity Middleware in the Blockchain platform stack

All the above may sound good enough in theory, but how does it work in practice? The short answer today is – badly. This is like using email in 1992 – going to the post office or fax machine was easier. The good news is that there are a lot of very smart people working on the problem, the prize is big, and no scientific breakthrough is needed. So we can be confident that this problem will be solved, even if we don't yet know by which company and in what timeframe. The general outline of the solution is becoming clear and was visible at the Ark ecosystem conference. The solution will come (as it usually does in software) through a three-level stack. The reason this is hard to see is that the middle and bottom levels are still being rolled out, and no top-of-stack UX services have yet made it super easy for consumers. The Crossing the Chasm model assumes that these consumer UX services will first appear in edge cases, niches that most big vendors ignore (more on that in the next section). I arrived at the event as an Ark person was presenting the Middleware part, which they call Persona. This was clever in two ways. The closest I have seen to this is what Consensys is doing with UPort on the Ethereum Blockchain.

For years, anybody who talked about privacy was dismissed as a "privacy nut", and the refrain was that if you wanted privacy you must be doing something illegal. Three recent events indicate some change is happening. However, to drive behavioural change, individuals must make Privacy an A List priority. Most people, even once they agree that privacy is important, don't take action by doing something like using TOR or DuckDuckGo. To date, a lot of the people who did make it an A List priority were assumed to be doing it for terrible things such as terrorism or pedophilia.
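To make the three concepts above a little more concrete, here is a minimal sketch in Python. It is not a real blockchain or any vendor's product; the IdentityLedger class and its methods are invented for illustration. It only shows an append-only, tamper-evident record controlled by a private key, with selective disclosure of individual attributes via hash commitments.

```python
# Minimal sketch of the three ideas: holder-controlled key, append-only records,
# and granular (per-attribute) disclosure. Illustrative only, not a real chain.
import hashlib
import hmac
import json
import os
import time


class IdentityLedger:
    def __init__(self, private_key: bytes):
        self._key = private_key          # only the identity holder knows this
        self._chain = []                 # append-only list of signed records

    def append(self, attribute: str, value: str) -> None:
        prev_hash = self._chain[-1]["hash"] if self._chain else "0" * 64
        record = {
            "attribute": attribute,
            # store a commitment, not the raw value, so disclosure stays granular
            "commitment": hashlib.sha256(value.encode()).hexdigest(),
            "prev": prev_hash,
            "ts": time.time(),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        self._chain.append(record)       # data can be appended, never rewritten

    def reveal(self, index: int, value: str) -> bool:
        """Selective disclosure: prove one attribute matches its commitment."""
        return hashlib.sha256(value.encode()).hexdigest() == self._chain[index]["commitment"]


ledger = IdentityLedger(os.urandom(32))
ledger.append("degree", "BSc Computer Science, 2009")
print(ledger.reveal(0, "BSc Computer Science, 2009"))   # True
print(ledger.reveal(0, "PhD Astrophysics, 2012"))       # False
```

A production system would use asymmetric signatures and on-chain anchoring rather than an in-memory list, but the properties being sketched are the same three: holder control, append-only history, and granular disclosure.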
more...
No comment yet.
Scooped by Yves Mulkers
Scoop.it!

5 Strategies From Top Firms on How to Use Machine Learning

5 Strategies From Top Firms on How to Use Machine Learning | 7wData | Scoop.it
With machine learning making disruptive innovation easier than ever before, it's up to entrepreneurs to show the big kids how it's done. Machine learning is headed for a major growth spurt. After ticking past the $1 billion mark in 2016, the machine learning market is expected to hit $39.98 billion by 2025, according to a new report by Research and Markets. Where will all that growth come from? Everywhere! Machine learning was born in 1959, when the term was coined by computer scientist Arthur Samuel -- but only recently has the larger business community come to understand its value. In the next few years, it will be adopted by everyone from Fortune 500 firms to mom-and-pop shops. Of course, the first challenge of machine learning is identifying a use case. Not sure where to start? To make the most of this explosive technology, consider how today's top companies, ranging in industry from retail to hardware to media, are using it.

Retail giant Target discovered that machine learning can be used to predict not only purchase behavior, but also pregnancy. In fact, Target's model is so precise that it can reliably guess which trimester a pregnant woman is in based on what she's bought. After a father discovered through Target's persistent promotions that his 16-year-old daughter was pregnant, Target actually had to dial its initiative back by mixing in less specific ads.

Most companies' promotions are driven by the seasons or holidays. Snow shovels go on sale in July, sunscreen in June. But consumers go through seasons in their own lives, too. The worst time to sell someone a car, for example, is right after she has just bought one. It might be the best time, however, to market car insurance to that person. Machine learning can pick up on those rhythms, helping companies recommend their products to customers when the timing is just right. My company has used machine learning to spur loyalty purchases. We discovered that if a customer is going through a life event (such as graduation or marriage), he is more likely to change his behavior than at other times in his life. An education company that knows 20 percent of its users leave every May, for example, might use machine learning to refer likely grads to a corporate partner or sponsor.

When someone posts a photo to Twitter, she wants people to see it. But if the thumbnail is 90 percent floor or wall, nobody is going to click on it. Twitter seems to have solved this problem by using neural networks. In a scalable, cost-effective way, the social media firm is using machine learning to crop users' photos into compelling, low-resolution preview images. The result is fewer thumbnails of doorknobs and more of the funny signs just above them. Give Twitter's thumbnail optimization a try for your next marketing campaign. Upload brand-aligned, user-generated photos, and let Twitter determine which elements of each image maximize engagement. Then use the top-performing photo crops for your next Twitter campaign. Who doesn't love free market research?

We use machine learning to tweak images for conversational engagements. The challenge is making sure that rich images load quickly enough to keep up with a live conversation. A user who sends a question or search request expects a reply in the form of an image or GIF immediately. Using machine learning, we can deliver appropriate responses at scale in seconds.
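As a rough illustration of the life-event idea described above, here is a minimal sketch in Python using scikit-learn. The data is synthetic and the feature ideas are hypothetical; it only shows the general pattern of training a propensity model and then prioritising outreach to the customers it scores highest, not any company's actual pipeline.

```python
# Toy life-event propensity model: synthetic data, illustrative features only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
# Hypothetical behavioural features, e.g. basket change, new postcode, age band
X = rng.normal(size=(n, 3))
# Synthetic label: "went through a life event", correlated with the first two features
y = ((X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n)) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Score customers and flag the top decile for a timely, relevant offer
scores = model.predict_proba(X_test)[:, 1]
top_decile = scores > np.quantile(scores, 0.9)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}, "
      f"customers flagged for outreach: {top_decile.sum()}")
```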
more...
No comment yet.
Scooped by Yves Mulkers
Scoop.it!

10 Steps to Detect Lateral Movement in a Data Breach

10 Steps to Detect Lateral Movement in a Data Breach | 7wData | Scoop.it
Many enterprises spend millions of dollars on solutions that promise to bolster their security. However, much less focus is placed on the ability to detect lateral movement during a breach. We've seen time and time again that once an attacker gains an initial foothold in a network, they will typically perform internal reconnaissance to solidify their presence. From this point onward, most attackers follow the same basic strategy – gain access to a lower-privileged, less secured host, escalate privileges, and then begin seeking out additional targets on the network. If you can identify the attacker during lateral movement, it's game over for them.

Unfortunately, it's not easy to dig deep into internal networks. When the amount of data generated is in petabytes, even the best data breach security solution will produce a large number of false positives. The problem is so severe that 55 percent of the security alerts organizations receive are considered erroneous. The irrelevance and volume of alerts lead enterprises to ignore or disable their logging solutions. However, it doesn't have to be that way. If you can set up barriers along the way, you may be able to protect against high-value breaches, or at least slow the adversary down enough that you're ready to contain the outbreak. Here are ten steps you can take to detect lateral movement.

Once inside a network, attackers prefer using native tools to avoid detection by EDR and anti-virus software. This is an anomaly that security teams can detect. Try to identify what tools your network administrators use and what resources they typically access, such as an intranet site or an ERP database. With that information, you can spot discrepancies in the way administrative tasks are performed. Also, a combination of directory services like Active Directory and network information (NetFlow data) can help you winnow down the list of expected behaviors, and from that provide a benchmark for comparison.

A significant challenge with all indicators of a data breach is that they demand detailed analysis of data that can't be readily accessed, and the security team must cross-reference a variety of information sources to gain insight. So the best thing to pay attention to is the login. By carefully monitoring login activity, you may be able to detect compromises before critical actions, such as data access and third-party compromise, take place. That makes login monitoring a pre-attack indicator – a logon after hours or at a strange time of day can indicate lateral movement.

Hackers love credentials because stolen accounts let them remain unidentified and ease their progress. They steal user accounts and use them to gain privileges and explore the network. Therefore, analyzing credential usage can help you spot outliers, and log analysis from your authorization and authentication infrastructure can help you identify credential abuse. For instance, data extraction and analysis will give you a sense of how many devices each authenticated user interacts with. Baseline the average user, then look out for anomalies.

One step an adversary usually takes is to identify which file servers can be broadly accessed, either to encrypt confidential data remotely or to extract essential data, such as credit card numbers or social security numbers. Discrepancies in file share access can therefore be a vital indicator of lateral movement and may also lead you to a malicious insider. Monitoring and analyzing logs from your file servers is the most efficient way to do this yourself. If you're using perimeter security tools, they may already be keeping tabs on command and control activity.
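As a toy example of the baselining described above, the sketch below uses pandas on a handful of made-up authentication events. The column names, hostnames and the 07:00–19:00 working-hours window are assumptions for illustration, not a recommended configuration or any vendor's detection logic.

```python
# Toy baseline of login behaviour: hosts-per-account fan-out and after-hours logons.
import pandas as pd

logs = pd.DataFrame({
    "user": ["alice", "alice", "alice", "bob", "bob", "svc_backup"],
    "host": ["ws-01", "ws-01", "fileserver-9", "ws-02", "ws-02", "dc-01"],
    "timestamp": pd.to_datetime([
        "2018-03-20 09:14", "2018-03-21 10:02", "2018-03-22 02:47",
        "2018-03-20 08:55", "2018-03-21 09:30", "2018-03-22 03:10",
    ]),
})

# Baseline: how many distinct hosts does each account normally touch?
hosts_per_user = logs.groupby("user")["host"].nunique()
print(hosts_per_user[hosts_per_user > 1])   # accounts fanning out to new hosts

# Pre-attack indicator: logons outside an assumed 07:00-19:00 working window
hour = logs["timestamp"].dt.hour
after_hours = logs[(hour < 7) | (hour >= 19)]
print(after_hours[["user", "host", "timestamp"]])
```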
more...
No comment yet.
Scooped by Yves Mulkers
Scoop.it!

Transparency, responsibility and accountability in the age of IoT

Transparency, responsibility and accountability in the age of IoT | 7wData | Scoop.it
The Internet of Things market resembles the wild west with its rapid, chaotic growth and lack of effective oversight or security. Gartner estimates there will be 26 billion IoT devices connecting to the Internet by 2020 – an almost 30-fold increase from 0.9 billion in 2009. IoT device manufacturers and enterprise security providers face an enormous challenge trying to scale up the process of identifying and authenticating those devices. The confidential user data that IoT devices collect and share falls under the same strict laws and regulations governing data security as all IT systems, be they laptops, on-premises databases or cloud computing platforms. Adopting a "Secure by Design" approach to device manufacturing and prioritizing users' privacy are key components in fostering transparency, responsibility and accountability in this age of IoT.

The IoT trend is transforming virtually every aspect of our lives for the better, but connecting the ever-growing number of devices creates additional risks to enterprises and consumers. James Clapper, then-U.S. Director of National Intelligence, warned of the risks of IoT to data privacy, data integrity and continuity of service in a report presented to the Senate Armed Services Committee, which stated that "devices, designed and fielded with minimal security requirements and testing, and an ever-increasing complexity of networks could lead to widespread vulnerabilities in civilian infrastructures and US government systems."

Consumers are right to be concerned about guarding their privacy. When an organization like a hospital connects a new MRI machine to its network, it creates a new cyberattack vector that hackers can use to access or steal data, and even gain control of the hardware itself. The Identity Theft Resource Center (ITRC) recorded 1,293 breaches last year – 21% higher than 2016 (the previous record-holder). It seemed like a massive data breach made headlines every week last year. So it's understandable that expectations are high among government regulators and consumers that IoT device manufacturers, and the enterprises that deploy those devices, become better at securing confidential information. But that does not translate to a requirement that companies thwart 100 percent of all cyberattacks. Consider the EU's General Data Protection Regulation (GDPR), which takes effect May 25. It establishes very strict requirements for protecting customer data and a tight 72-hour timeframe for reporting a data breach. But if a company can demonstrate it has taken adequate steps to protect information, and promptly notifies affected customers about a breach, it won't be fined simply for falling victim to an attack.

What regulators and consumers do want to see from manufacturers and the organizations that implement connected devices are the highest levels of transparency, responsibility and accountability. We saw the negative effects of a lack of transparency with the recent discovery of the Spectre and Meltdown vulnerabilities. U.S. lawmakers demanded that representatives from several technology companies explain why they waited months after discovering the vulnerabilities to make the details public – in other words, explain their lack of transparency. These companies have explained that they were taking time to assess the risk and were concerned that premature disclosure would have given attackers time to exploit the vulnerabilities. That may be a valid argument, but the damage to their reputations was done.

Effective IoT device security does not mean creating a perfect product that never has any vulnerabilities; it means having a process that addresses all vulnerabilities quickly and completely. An organization must know that when a device is registered and attached to its network, it is legitimate and not fraudulent.
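As a hedged illustration of what verifying a registered device can look like, here is a minimal challenge-response sketch in Python. The register_device and verify_device names, the in-memory key registry and the use of a simple HMAC shared secret are all assumptions made for the example; real deployments typically provision per-device certificates backed by a PKI and hardware-protected keys.

```python
# Toy device registration and challenge-response authentication sketch.
import hashlib
import hmac
import os
import secrets

DEVICE_KEYS = {}  # hypothetical registry: device_id -> secret provisioned at manufacture


def register_device(device_id: str) -> bytes:
    """Provision a per-device secret (a 'Secure by Design' flow would do this in the factory)."""
    key = os.urandom(32)
    DEVICE_KEYS[device_id] = key
    return key  # in practice this would live in secure hardware, not be returned


def issue_challenge() -> bytes:
    return secrets.token_bytes(16)


def device_response(device_key: bytes, challenge: bytes) -> str:
    """What a legitimate device computes with its provisioned secret."""
    return hmac.new(device_key, challenge, hashlib.sha256).hexdigest()


def verify_device(device_id: str, challenge: bytes, response: str) -> bool:
    """Network side: only a device holding the provisioned key can answer correctly."""
    expected = hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)


key = register_device("mri-scanner-01")
challenge = issue_challenge()
print(verify_device("mri-scanner-01", challenge, device_response(key, challenge)))            # True
print(verify_device("mri-scanner-01", challenge, device_response(os.urandom(32), challenge))) # False
```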
more...
No comment yet.