Research conducted by Accenture and the G20 Youth Entrepreneurs’ Alliance (G20 YEA) has found that digital entrepreneurs are creating real job opportunities for young people in G20 countries.
Why Is Entrepreneurship At Record Lows? [ANALYSIS] (ValueWalk)
Sintetia: The aim of your book is, as you state, to define a program for research at the intersection of the theory of the firm and entrepreneurship.
The Food and Drug Administration (FDA), which regulates everything from heart monitors to horse vaccines, will soon have its hands full with consumer health apps and devices.

The vast majority of the health apps you’ll find in Apple’s or Google’s app stores are harmless, like step counters and heartbeat monitors. They’re non-clinical, non-actionable, and informational or motivational in nature. But the next wave of biometric devices and apps might go further, measuring things like real-time blood pressure, blood glucose, and oxygen levels.

More clinical apps

The FDA is charged with keeping watch on the safety and efficacy of consumer health products. Lately, that includes more clinical apps as well as devices you might buy at the drugstore, like a home glucose testing kit.

“It’s these apps that the FDA says it will regulate,” David Bates of Brigham and Women’s Hospital and Physicians Organization told VentureBeat in June. “These apps will have to go through the full 510(k) process,” he said.

Dr. Bates chaired a group that advised the FDA on how to review health apps for approval, and on how the FDA should advise developers. “It was intended to help them think through the risk factors involved with these products and then give guidance on how to stay within the guidelines,” he said. “The device makers were asking for some guidance from the FDA on what types of things would be accepted and what wouldn’t,” Bates said.

Bates believes the FDA wants to use a light regulatory touch when looking at new medical devices. “The FDA definitely wants innovation to continue in clinical devices,” he said. “In general, the FDA knows that the vast majority of apps are just informational.”

The FDA’s final guidance focuses on a small subset of mobile apps that present a greater risk to patients if they do not work as intended.

Health apps go mainstream

The big software companies (Apple, Google, and Samsung) have brought attention to, and lent credibility to, apps and devices that do more than count steps. These companies are building large cloud platforms designed to collect health data from all sorts of health apps and devices.

more at http://venturebeat.com/2014/07/21/health-apps-are-changing-so-must-the-fda/
After decades as a technological laggard, medicine has entered its data age. Mobile technologies, sensors, genome sequencing, and advances in analytic software now make it possible to capture vast amounts of information about our individual makeup and the environment around us. The sum of this information could transform medicine, turning a field aimed at treating the average patient into one that’s customized to each person, while shifting more control and responsibility from doctors to patients.

The question is: can big data make health care better?

“There is a lot of data being gathered. That’s not enough,” says Ed Martin, interim director of the Information Services Unit at the University of California San Francisco School of Medicine. “It’s really about coming up with applications that make data actionable.”

The business opportunity in making sense of that data – potentially $300 billion to $450 billion a year, according to consultants McKinsey & Company – is driving well-established companies like Apple, Qualcomm, and IBM to invest in technologies from data-capturing smartphone apps to billion-dollar analytical systems. It’s feeding the rising enthusiasm for startups as well. Venture capital firms like Greylock Partners and Kleiner Perkins Caufield & Byers, as well as the corporate venture funds of Google, Samsung, Merck, and others, have invested more than $3 billion in health-care information technology since the beginning of 2013 – a rapid acceleration from previous years, according to data from Mercom Capital Group.

more at http://www.technologyreview.com/news/529011/can-technology-fix-medicine/
The myth of the highly successful college (or even high school) dropout has lodged itself firmly in the startup scene. (RT @mashable: What Level of Education Do Startups Look For When Hiring? It Depends.)
Joining a startup or starting a new business can be quite challenging. You need a clear idea of how startups work if you want to build your startup on a solid foundation. This infographic by MBAPrograms.org and oBizMedia serves as a primer on startups:
Should Startups Really Skip Legal Protection? (Forbes)
A recent article in the New York Times discussed the trend away from using Non-Disclosure Agreements (NDAs) among a variety of players in the startup ecosystem.
Here's where startups can partner with Cincinnati's biggest companies (Cincinnati Business Courier)
Cincinnati's largest companies are looking for startups to partner with at the inaugural Innovation Xchange expo in September.
Freddy Nurski's insight:
A good example of how large corporates can access innovation.
The health care industry is ripe for disruptive innovation as systemic challenges continue to confront the industry and stakeholders demand increased value. (How and where might disruptive innovation occur in #healthcare?)
Few healthcare IT policies these days are as delicate, sensitive, and potentially emotionally explosive as efforts to restrict or regulate employee social media activity. And yet hospital hierarchies are routinely stepping into these political minefields as providers try to protect their reputations.

Consider a recent incident at the 2,478-bed New York Presbyterian Hospital. An ER nurse posted a photograph of a trauma room – no staff or patients were in the picture – after caring for a man who had been hit by a subway train. The caption: "Man vs. 6 train." The image simply showed a room that had seen a lot of action moments before. The veteran nurse was fired after the incident, according to an ABC News report, not because she had breached hospital policy or violated HIPAA, but, as she put it: "I was told I was being fired for being insensitive."

This legitimately raises key issues around what a hospital's social media policy should be. This specific incident, though, appears to be an impressively poor choice for the hospital to have made its stand on. First, there really was no privacy issue at play. The photo shows nothing more than a slightly messy trauma room. The caption is vague and is hardly worse than a police officer posting a car-accident image with a note warning people against drinking or texting while driving. (If anything, a wrecked car would be recognizable to the patient, and to friends and family, especially if a license plate were visible, whereas a generic trauma room photo isn't.)

An even bigger problem with using this incident is that the nurse, Katie Duke, didn't even take the photograph. It was taken by a staff doctor, who had posted it on the doctor's own Instagram page; Duke had merely reposted it. The consistency killer? The doctor "was not reprimanded," ABC News reported. To be fair, it's not clear whether the doctor's post included the "Man vs. 6 train" comment.
Given that the comment – as opposed to the image – appears to be the trigger here, the hospital's disciplinary process may or may not have been inconsistent.

Let's get back to the social media policy issues. I would hate to have to issue a concrete definition – acceptable to our friends in Legal – of "sensitivity." What if there had been no image and the nurse had simply said something like "A grim reminder at the ER today about how dangerous and deadly subways can be. Don't take any chances – ever." Is that insensitive? And if not, how is it meaningfully different from what Duke reposted? She specified the subway line, which, by itself, is neither insensitive nor especially revealing. And she used an image of the trauma center, which showed nothing. Would it have made any difference had she posted a generic trauma center image from Google Images?

If no patient or hospital privacy has been violated, what is the issue? The issue is that she was accused of violating hospital policy. We've now gone full circle. What is reasonable to ban, as long as no one's privacy and no hospital confidentiality is violated? (A classic hospital confidentiality breach: "Wow, my hospital is getting away with amazing markups. We just charged a patient $XXX for something, and I saw the paperwork showing we only paid $X for it." The employee would be using information that he or she could only know from working there. That's a clear violation.)

Can a hospital ban employees from saying anything hospital-related in their social media posts? What if it's entirely positive, as in "Our surgical team is brilliant. We saved patients today that most surgeons would have lost"?

Here's the IT nightmare. What if the hospital says, "We're going to decide this on a case-by-case basis"? Danger, Will Robinson! Danger! Then it falls to IT to become the social media police. Are you then to track every social media feed of every employee and – gulp – review every posting for appropriateness?
And somehow management thinks that this approach will avoid lawsuits?

The simple fact is that a social media policy covering what an employee does in his or her personal time is highly problematic. On the other hand, there certainly is social conduct that has to be dealt with. What if a doctor set up a site that identified her as working at a named hospital and then said how her team tries to inflict as much pain as possible, and that they place bets on when different patients will scream or pass out?

The easiest route from an administrative perspective – but certainly not from a legal perspective – is to adopt something akin to the Pentagon's infamous Don't Ask, Don't Tell: a program with zero effort to uncover naughty social posts (as IT breathes a major sigh of relief) but a strict policy for punishing employees and contractors who engaged in bad behavior that the hospital happened to learn of. The problem is that this leads to inconsistent punishment – with most people never getting caught – and even to vindictive behavior, with employees combing through a rival's social posts for anything that might get them into trouble.

That policy might simply prohibit posts that reflect poorly on the hospital, which is vague enough to allow senior management to make customized decisions. As long as healthcare doesn't decide that it needs to proactively check all posts – a thankless task that would almost certainly fall to IT, which would try to automate much of that assignment from Hell – any concrete policy is better than none.
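To make concrete why that "assignment from Hell" is so hard to automate, here is a minimal sketch of the kind of keyword screen an IT shop might be asked to build. Everything here is a hypothetical illustration, not anything the hospital in the story actually did: the watch-term list, the `flag_for_review` function, and the sample posts are all assumptions. Note that even this toy version flags the harmless and misses the genuinely problematic, which is exactly the article's point.

```python
# Hypothetical sketch: a naive keyword screen for hospital-related social posts.
# A match only queues a post for HUMAN review; it decides nothing by itself.
import re

# Assumed watch list: terms that merely mention the hospital context.
WATCH_TERMS = ["trauma room", "patient", "surgical team", "ER"]

def flag_for_review(post: str) -> bool:
    """Return True if the post contains any watched term as a whole word,
    case-insensitively, and so should be routed to a human reviewer."""
    for term in WATCH_TERMS:
        if re.search(r"\b" + re.escape(term) + r"\b", post, re.IGNORECASE):
            return True
    return False

posts = [
    "Man vs. 6 train",                     # the actual caption: no term matches
    "Busy day in the trauma room today.",  # innocuous, yet it gets flagged
]
flagged = [p for p in posts if flag_for_review(p)]
```

Notice the failure mode: the real "Man vs. 6 train" caption sails through unflagged, while a harmless mention of a trauma room is caught, because keywords can't judge sensitivity or context. That gap is why any automated filter still leaves the case-by-case judgment calls with management.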