5G, IoT, Big Data, Analytics, AI & Cloud
Scooped by Al Sedghi

Because Digital Smart City Healthcare and Medical Systems

IoT-led innovations will drive a shift towards mature, value-based indicators for Smart City healthcare systems, improving efficiency and quality of life.
Al Sedghi's insight:
Smart Cities, by their very nature, produce significant amounts of data in their daily operations. IoT and Open Data initiatives are driving cities to collect and make available ever larger amounts of data – some of it static, but increasingly much of it real-time. This data exhibits the classic characteristics of Big Data: high volume, often real-time (velocity), and extremely heterogeneous in its sources, formats and characteristics (variety).

This big data, if managed and analyzed well, can offer insights and economic value that cities and city stakeholders can use to improve efficiency and to innovate new services that improve the lives of citizens.

The growing technology that captures, manages and analyzes this Big Data leverages trends such as cloud computing. Cities can now access and use massive compute resources that were too expensive to own and manage only a few years ago. Coupled with technologies like Hadoop/HDFS, Spark, Hive and a plethora of proprietary tools, it is now possible for cities to use big data and analytical tools to improve city services.

For example, Boston, USA is using big data to track city performance against a range of indicators, to identify potholes in city streets, and to improve the efficiency of garbage collection by switching to a demand-driven approach. New York has developed a system (FireCast) that analyzes data from six city departments to identify buildings with a high fire risk. In Europe, London uses a wide variety of city data and advanced analytics to map individual neighborhoods (made available through the Whereabouts service) to better understand resource allocation and planning. In Asia, Singapore tracks transportation in real time and runs a demand-driven road-pricing scheme to optimize road usage across the island.

Big Data and IoT - How The Future Of Analytics is Evolving | Analytics Training Blog

With each passing day, more objects and machines are getting connected to the internet and transmitting information for analysis. The objective is to harness this data to discover trends and results that can have a positive impact on any business. After all, the future of technology lies in the hands of data.
Al Sedghi's insight:
The biggest barrier facing enterprises considering IoT deployments will be knowing what to do with the massive amounts of information that will be gathered.

You will face data from many different sources: social media, sensors, embedded devices, and more. So the challenge is to design for analytics: develop a strategy that treats data more as a supply chain than as a warehouse. There will be numerous unstructured data sources, so it is key to focus on collecting and organizing the data that you really need.
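As a minimal sketch of the supply-chain idea, the snippet below maps records from two hypothetical sources onto one common schema, keeping only the fields actually needed downstream (all field names here are illustrative, not from any real API):

```python
# Sketch: normalize heterogeneous records (social media, sensors, ...)
# into one minimal schema, keeping only the fields we actually need.

def normalize(record: dict, source: str) -> dict:
    """Map a source-specific record onto a common schema."""
    if source == "social":
        return {"source": source, "ts": record["created_at"], "value": record["text"]}
    if source == "sensor":
        return {"source": source, "ts": record["timestamp"], "value": record["reading"]}
    raise ValueError(f"unknown source: {source}")

raw = [
    ({"created_at": 1700000000, "text": "outage downtown"}, "social"),
    ({"timestamp": 1700000005, "reading": 21.4}, "sensor"),
]
pipeline = [normalize(record, src) for record, src in raw]
```

Each new source only needs one more mapping branch; everything downstream of `normalize` sees a single schema.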

For data analysis, there are five Vs to worry about: Variety, Volume, Velocity, Veracity, and Value.

Variety: the different types of data, which grow more diverse every day. Structured data that has traditionally been stored in databases is now being linked with unstructured data, including social media data, wearable data, and video streams.

Volume: the scale of data and how it is obtained and stored. According to IBM, approximately 2.5 quintillion bytes of data are created every single day. By the year 2020, there will be 300x more information in the world than there was in 2005 (approximately 43 trillion gigabytes).

Velocity: the pace at which data is generated and must be processed.

Veracity: the reliability of data versus its ambiguity. According to IBM, poor data quality costs the U.S. economy over $3 trillion a year.

Value: the ability to make data profitable by using analyzed data to grow revenue and reduce cost.

It is evident that the variety and volume of data being delivered through networks is overwhelming; consequently, this impacts velocity, i.e. how quickly technology and enterprises can analyze everything that is collected. We should employ well-organized network connections, monitors and sensors to work out behavioral patterns, and applications to structure those patterns. The key V to focus on is Value: monetizing Big Data.

Business Challenges 

We need to concentrate on analytics, accurate forecasting, evaluating and implementing new tools and technologies, and getting real-time insights. We must determine how to profit from something that takes time, money and resources to keep up with.

Business analytics: We must use analytics to grow our operational acumen and measure whether it’s working and who it’s working for. 

Accurate forecasting: By leveraging analytics we can make reliable inferences about the future of the market. We understand what we did; the next question is, how can we do it better? What's next for our demographic? What can we conclude our demographic will respond to?

Real-time insights: These are critical to ensuring that the analytics and forecasting are worth it. Time is crucial: enterprises want to know what's going on immediately so they can determine how to respond. Big Data can deliver this.
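One simple way to picture real-time insight is an aggregate over a sliding time window, so the answer always reflects only recent events. A minimal sketch (the 60-second horizon and all values are illustrative):

```python
from collections import deque
from time import monotonic

class RollingWindow:
    """Keep only events from the last `horizon` seconds; aggregate on demand."""
    def __init__(self, horizon):
        self.horizon = horizon
        self.events = deque()  # (timestamp, value) pairs

    def add(self, value, now=None):
        now = monotonic() if now is None else now
        self.events.append((now, value))
        # Evict events that have fallen out of the window.
        while self.events and self.events[0][0] < now - self.horizon:
            self.events.popleft()

    def average(self):
        return sum(v for _, v in self.events) / len(self.events)

w = RollingWindow(horizon=60.0)
for t, v in [(0, 10.0), (30, 20.0), (90, 30.0)]:
    w.add(v, now=t)  # by t=90, the t=0 reading has aged out
```

After the last event the window holds only the readings from t=30 and t=90, so the average reflects the last minute, not all history.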

Technology Challenges 

Workforce: We must train the workforce that's already in place and attract and hire top talent to fill the gaps. It's a flourishing market for IoT and Big Data specialists. Specialists with expertise in skills such as VMware, application development, open source technology, data warehousing and solid programming will be the ones to employ.

Infrastructure: Agile IT is key to the business. From an architectural point of view, the fewer systems you have, the more agile you will be. Here is an ecosystem buildout to focus on:

• To reduce cost, add modern systems alongside legacy systems initially. Use the existing architecture and take an evolutionary approach to build the smart ecosystem of the future. There is no need to start from scratch; that would cost valuable time and resources.

• Focus on building out the infrastructure for business-critical applications:

- Note that customer experience depends on high availability.
- Be sure to plan for hardware, network and data management applications.
- Design data storage with the capacity to hold and update data at low cost.
- The network must be capable of cost-effectively transferring data to and from frameworks and architectures, while allowing for future growth.
- Data management applications must be adaptable for processing, classifying and consuming vast amounts of data in real time.

• Analyze data for predictive analytics and directed marketing campaigns. 


Invest in established BI tools and applications. This will enable you to take advantage of the vast amounts of data collected and to gain valuable insight. Focus on applications that are easy to use and provide interactive interfaces to help you stay in control and make informed decisions.

Regulatory Challenges 

Be aware of current and new regulations that limit the use, storage and collection of certain types of data. Obtaining and analyzing data to improve business practices will help to increase revenue and decrease costs.

How Do You Succeed?

Notwithstanding all the challenges, there are many opportunities to monetize IoT / Big Data and get ahead of the competition. These include opportunities to:

1) Grow revenue by tailoring advertising, products and services, and by leveraging predictive analytics to get real-time insights into customer behavior. 

2) Reduce costs through security, fraud prevention and network optimization.

3) Improve the Customer Experience by getting customer feedback, targeted and predictive marketing, and tailored products and services. 

To overcome these challenges there must be collaboration across the three silos above: business, technology and regulatory. For instance, the business and technology silos need to work together to discover synergies and to enhance and automate IT and business processes for the most effective operation across the enterprise. Work together as a team and be smart.

Navigating a Successful Journey toward IoT and Big Data in the Cloud | IT News Africa – Africa's Technology News Leader


"It is predicted that by 2020, 25 billion devices will be connected to IoT and 600 zettabytes of information will be sitting in

Al Sedghi's insight:
IoT is about more than devices, data and connectivity. The real significance of the Internet of Things lies in creating smarter products, delivering intelligent insights and producing new business outcomes.

As we move into the future and millions more devices get connected, the Internet of Things will produce an enormous inflow of Big Data. The key challenge is envisioning and revealing insights from the various categories of data (structured, unstructured, images, contextual, dark data, real-time) in the context of your applications.

I believe gaining intelligence from Big Data using artificial intelligence technologies is the key enabler for smarter devices and a connected world.

The final objective is to connect the data coming from sensors with other contextual information to discover patterns and associations in real time that positively impact businesses. Current Big Data technologies must be extended to effectively store, manage and extract value from continuous streams of sensor data.

In the case of connected cars, if 25 gigabytes of data is being sent to the cloud every hour, the main goal must be to make sense of this data: detecting the data that can be consumed and rapidly acted upon to produce actionable events. The evolution of AI technologies will be key to deriving insights rapidly from massive streams of data.

With IoT, Big Data analytics will also need to move to the edge for real-time decision making. Here are some examples: i) detecting crop patterns in agriculture using drones, ii) detecting suspicious activities at ATMs, or iii) predicting driver behavior for a connected car.
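The edge-analytics idea above can be sketched as a filter that forwards only actionable events to the cloud instead of the full 25 GB/hour stream. The sensor names and thresholds below are purely illustrative:

```python
# Sketch: edge-side filtering so only actionable events reach the cloud.
# Sensor names and alert thresholds are hypothetical, not from any platform.

THRESHOLDS = {"engine_temp_c": 110.0, "brake_pressure_kpa": 900.0}

def actionable_events(readings):
    """Yield only the readings that exceed their alert threshold."""
    for sensor, value in readings:
        limit = THRESHOLDS.get(sensor)
        if limit is not None and value > limit:
            yield {"sensor": sensor, "value": value, "limit": limit}

stream = [
    ("engine_temp_c", 95.0),        # normal: dropped at the edge
    ("engine_temp_c", 118.5),       # over threshold: forwarded
    ("brake_pressure_kpa", 640.0),  # normal: dropped at the edge
]
alerts = list(actionable_events(stream))
```

Only the one over-threshold reading crosses the network, which is the whole point of pushing analytics to the edge.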

The Future Is Intelligent Apps – InFocus Blog | Dell EMC Services


Enterprises must pay attention to key application technology and architecture capabilities

Al Sedghi's insight:
Data gathering has advanced drastically. From automobiles to smart grids to patients' bodies, there are now dashboards and platforms for collecting, organizing and displaying detailed information, offering richer data sets ranging from gas consumption to vital signs with sophisticated alert configuration.

APIs and Open Source are essentially democratizing data in all these scenarios. Amid all this lie Machine Learning and Predictive Analytics. Two things can really threaten putting machine learning to work: a) poor data quality and b) lack of data integration. The improvement of APIs and the trend toward open sourcing can neutralize both threats. There are many open source projects on GitHub that software developers can leverage to integrate machine learning into their applications. Last year Google started releasing lower-level libraries like TensorFlow, which can be combined with others to match whatever level of sophistication a developer or data scientist is after. For novices, there are services like Amazon Machine Learning, which provides a simple UI for non-developers.
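At its core, the predictive-analytics idea is just fitting a model to past data and asking it about new inputs. A minimal sketch with hand-rolled ordinary least squares for one feature (the data is synthetic and the "ad spend vs. conversions" framing is hypothetical):

```python
# Sketch: fit a simple model to past data, then predict. This is the idea
# behind ML services/libraries, reduced to one-feature least squares.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical history: ad spend vs. conversions (exactly y = 2x + 1 here).
spend = [1.0, 2.0, 3.0, 4.0]
conversions = [3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_line(spend, conversions)
predicted = slope * 5.0 + intercept  # forecast for unseen spend level
```

Real libraries add regularization, many features and validation, but the fit-then-predict loop is the same.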

Enterprises must pay attention to key application technology and architecture capabilities, such as data management services, user self-service, more shareable analytics, and agile development leveraging the latest PaaS and DevOps techniques.

The Modern Data Platform


“The Modern Data Platform” starts off with a look back that lays the groundwork for the present-day challenges that follow.

Al Sedghi's insight:
Here are some of the main factors for getting the Big Data Analytics platform right: 

1. Raging speed 

Opportunities around data are greater than ever. Business users and customers expect results almost instantly, but meeting those expectations can be challenging, especially with legacy systems. Speed is not the only factor in executing a Big Data analytics strategy, but it is at the top of the list. I was working with a customer running queries on a 10-terabyte data set; with that solution, a query would take 48 hours to come back with an answer, and after 48 hours the question is almost moot. There is no benefit to an answer that arrives after the time to act has passed.
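Query speed is largely an access-path question. As a toy illustration (synthetic data, not from the article), an index turns a full scan over every row into a constant-time lookup:

```python
# Sketch: full scan vs. indexed lookup over synthetic rows.

rows = [{"id": i, "city": f"city-{i % 100}"} for i in range(100_000)]

def full_scan(rows, wanted_id):
    """O(n): touches every row until it finds a match."""
    for row in rows:
        if row["id"] == wanted_id:
            return row
    return None

index = {row["id"]: row for row in rows}  # built once, O(n)

def indexed_lookup(wanted_id):
    """O(1) per query once the index exists."""
    return index.get(wanted_id)

hit = indexed_lookup(99_999)
```

Modern analytics engines apply the same principle at scale with columnar storage, partitioning and distributed execution.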

2. Massive—and growing—capacity

Your Big Data analytics solution must be able to handle huge quantities of data, and it must also be able to grow organically as that data increases. You must be able to grow your database in line with your data growth, and do it in a way that is transparent to the data consumer or analyst. A modern analytics solution exhibits very little downtime, if any at all; capacity and compute expansion happens in the background.

3. Easy integration with legacy tools 

A significant part of an analytics strategy is ensuring that it works with what you have, but also identifying which tools must be replaced, and when. Many people have made investments in older tools, e.g. extract, transform, load (ETL) tools. Of course, it is vital to support those legacy tools, but at scale, as the need for data and analysis grows, you may find that scaling those ETL solutions becomes a costly problem. It might make more sense to re-tool your ETL with a more modern, more parallel solution.
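The "more parallel" re-tooling can be as simple as fanning the transform step out over a worker pool. A minimal sketch with the standard library (the `transform` body is a stand-in for real cleansing logic):

```python
# Sketch: a serial ETL transform re-tooled as a parallel one.
from concurrent.futures import ThreadPoolExecutor

def transform(record):
    """Stand-in 'T' step: cleanse and reshape one record."""
    return {"id": record["id"], "name": record["name"].strip().lower()}

def run_etl(records, workers=4):
    # For CPU-bound transforms, swap in ProcessPoolExecutor; the
    # structure of the code stays the same.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(transform, records))  # preserves input order

raw = [{"id": 1, "name": "  Alice "}, {"id": 2, "name": "BOB"}]
clean = run_etl(raw)
```

Distributed ETL frameworks generalize exactly this pattern across machines rather than threads.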

4. Working well with Hadoop

For many organizations, this open-source Big Data framework has become synonymous with Big Data analytics. But Hadoop alone is not enough. At the end of the day, Hadoop is a batch processing system: when I start a job to analyze data, it goes into a queue, and it finishes when it finishes. When you're dealing with high-concurrency analytics, Hadoop shows its weaknesses. What's needed is a way to harness the advantages of Hadoop without incurring its performance penalties and potential disruptions.

5. Support for data scientists

Enterprises should help their most expert (and most in-demand) data workers by investing in tools that allow them to conduct more robust analysis on larger sets of data. What's important is moving toward a solution where data scientists can work on the data in place, in the database. Today, for instance, if they have SQL Server, they pull a subset or sample of data out of the database, transfer it to their local machine, and run their analysis there. If they can run statistical models in-database, they no longer need to sample, and they get their answers much faster. It's a significantly more efficient process.
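The in-place vs. pull-and-compute contrast can be shown with SQLite standing in for any SQL engine: the first query ships one aggregate across the boundary, the second ships every row first.

```python
# Sketch: in-database aggregation vs. pulling rows out to compute locally.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("a", 1.0), ("a", 3.0), ("b", 10.0)])

# In-database: only the aggregate travels over the wire.
avg_a = conn.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 'a'").fetchone()[0]

# Pull-and-compute: every matching row crosses the boundary first.
rows = conn.execute(
    "SELECT value FROM readings WHERE sensor = 'a'").fetchall()
avg_a_local = sum(v for (v,) in rows) / len(rows)
```

On three rows the difference is invisible; on ten terabytes, the second pattern is the 48-hour query.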

6. Advanced analytics features

As organizations move toward predictive analytics, they demand more from their data technology. It's beyond just reporting; it's beyond aggregates in the data warehouse. You may need complicated queries of the data in your database: predictive, geospatial, and sentiment-focused.

Can artificial intelligence cataloging be the Google for enterprise big data?


Al Sedghi's insight:
Unlike with a Relational Database Management System (RDBMS), when data lands in Hadoop (on HDFS) there is no assurance that the data, at a record level, is “good,” or consumable. What's more alarming, when retrieving this data (whether with Hive, Spark, Impala, etc.) the user has no way to know whether the data is “bad” or not. This is one of many challenges in what's referred to as “on-boarding data”; other key areas to consider include organization and history management.
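One common mitigation is record-level quality checks at on-boarding time, so "bad" rows are flagged before anyone queries them. A minimal sketch (the schema and rules are illustrative):

```python
# Sketch: partition incoming records into good/bad at on-boarding time,
# using per-field validation rules. Schema and rules are hypothetical.

RULES = {
    "id": lambda v: isinstance(v, int) and v > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def partition(records):
    """Split records into those passing all rules and those failing any."""
    good, bad = [], []
    for rec in records:
        ok = all(field in rec and check(rec[field])
                 for field, check in RULES.items())
        (good if ok else bad).append(rec)
    return good, bad

good, bad = partition([
    {"id": 1, "amount": 9.5},
    {"id": -3, "amount": 2.0},  # fails: non-positive id
    {"id": 2},                  # fails: missing amount
])
```

The bad partition can then be quarantined and repaired instead of silently polluting downstream queries.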

Having an intelligent data management platform offering self-service data analytics will help marketers understand their customers and find similar customers across various marketing channels, devices, ad campaigns, etc. 

This platform must have the ability to create models that progress from descriptive to predictive to prescriptive analytics. Users evaluate possible alternatives and predict outcomes through simulation analysis.

This leads me to the next point: why is AI a revived trend? It is all about the three Vs: velocity, variety, and volume. Platforms that can process the three Vs with modern processing models that scale horizontally provide a ten- to twenty-fold cost advantage over traditional platforms.

We will realize the highest value from applying AI to high-volume, repetitive tasks, where uniformity and stability are more effective than individual intuitive oversight obtained at the expense of human error and cost.

Taming Big Data with Spark Streaming for Real-time Data Processing

Learn how Spark Streaming capabilities help handle big and fast data challenges through stream processing by letting developers write streaming jobs.
Al Sedghi's insight:
Spark is a technology well worth considering and learning about. It has a growing open-source community and is the most active Apache project at the moment. 

Spark offers a faster and more general data processing platform: it lets you run programs up to 100x faster in memory, or 10x faster on disk, than Hadoop. Other reasons to consider Spark are that it is highly scalable, simpler and modular. This could be why it is being adopted by key players like Amazon, eBay, Netflix, Uber, Pinterest and Yahoo.

Last year, Spark overtook Hadoop by completing the 100 TB Daytona GraySort contest 3x faster on one tenth the number of machines, and it also became the fastest open source engine for sorting a petabyte. Spark also enables you to write code faster, as you have over 80 high-level operators at your disposal.

Additional major features of Spark include: 

- Provides APIs in Scala, Java, and Python, with support for other languages (such as R) on the way 

- Integrates well with the Hadoop ecosystem and data sources (HDFS, Amazon S3, Hive, HBase, Cassandra, etc.)

- Can run on clusters managed by Hadoop YARN or Apache Mesos, and can also run standalone

Overall, Spark simplifies the challenging and compute-intensive task of handling high volumes of real-time or archived data, both structured and unstructured, effortlessly integrating relevant complex capabilities such as machine learning and graph algorithms.
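Spark Streaming's core model is the micro-batch: the stream is cut into small batches and each batch is processed with the same logic a batch job would use. The pure-Python sketch below illustrates that model only; no Spark API is used, and the word-count stream is synthetic:

```python
# Sketch of the micro-batch model: slice a stream into small batches and
# update a running aggregate per batch (pure Python, not the Spark API).
from collections import Counter

def micro_batches(stream, batch_size):
    """Yield the stream in fixed-size batches, plus a final partial batch."""
    batch = []
    for item in stream:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

running = Counter()
stream = ["spark", "hadoop", "spark", "hive", "spark"]
for batch in micro_batches(stream, batch_size=2):
    running.update(batch)  # per-batch update of the running word count
```

In real Spark Streaming the batches are RDDs/DataFrames and the update is a distributed stateful operation, but the slice-then-aggregate rhythm is the same.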