When Google unveiled Android, it hoped it would make good-quality touchscreen smartphones accessible to everyone. To achieve this, it took the unprecedented step of making its new mobile OS open source, encouraging anyone to contribute, users and manufacturers alike.
Object.observe() is still unofficial; so far it has only been implemented in Chrome, which means developers who use it can't count on their apps working in other browsers such as Firefox or Apple's Safari. And it's not clear when, or even whether, other browser makers will jump on the Object.observe() bandwagon.
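Because support is Chrome-only, any real use of Object.observe() has to feature-detect it and fall back to something else. Here is a minimal sketch of that pattern; the observeChanges helper, its polling interval, and its return values are invented for illustration, not part of any library.

```javascript
// Feature-detect Object.observe() before relying on it. Only Chrome
// implements it at the time of writing, so other browsers need a
// fallback; here, a crude change-polling loop.
function observeChanges(obj, callback) {
  if (typeof Object.observe === 'function') {
    // Chrome: native asynchronous change notifications.
    Object.observe(obj, callback);
    return 'native';
  }
  // Fallback for Firefox, Safari, etc.: poll for changes by comparing
  // serialized snapshots of the object.
  var last = JSON.stringify(obj);
  var timer = setInterval(function () {
    var current = JSON.stringify(obj);
    if (current !== last) {
      callback([{ object: obj, type: 'update' }]);
      last = current;
    }
  }, 100);
  // In Node.js, unref the timer so polling doesn't keep the process alive.
  if (timer.unref) timer.unref();
  return 'polling';
}
```

The polling fallback is far less efficient than native notifications, which is exactly the trade-off developers face until other browser makers adopt the API.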
Being stuck with a graphic that was created for a different purpose or platform can be discouraging. Knowing how to resize that graphic with sharper, crisper edges can make all the difference in the world.
The underlying technology behind mobile phones, tablets and even laptops and desktops is constantly changing. And changes to the screen sizes, orientations and resolutions of their screens can at times make a graphic artist go insane.
More often than not, the graphics that have been used in the past were created and sized for a specific purpose and just do not seem to look right when repurposed on a new device or platform. It is likely that many graphics were sized for the web and were right-sized and optimized for faster downloads of pages.
Unfortunately just resizing an image to a larger size does not always produce the results one expects. Sometimes you are fortunate enough to have all of the base elements used when the graphic was originally created and can ‘re-cut’ the graphic for a new purpose. Other times you may just have to start over from scratch or hire a professional designer.
There is, however, one technique that can work when all you want to do is simply repurpose a good design for a different use. If that is the case, then the following will help recycle your existing graphics used with older technologies, for new platforms that you would like to move to in the future.
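To see why naive enlargement disappoints, consider the simplest possible resize, nearest-neighbor scaling, sketched below on a plain grid of pixel values. This is an illustrative toy, not any particular tool's algorithm: each source pixel is merely repeated, so no new detail is created and edges turn blocky.

```javascript
// Nearest-neighbor upscaling on a 2D array of pixel values. Each output
// pixel simply copies the nearest source pixel, which is why a small
// image enlarged this way looks blocky rather than sharper.
function upscaleNearest(pixels, factor) {
  var out = [];
  for (var y = 0; y < pixels.length * factor; y++) {
    var row = [];
    for (var x = 0; x < pixels[0].length * factor; x++) {
      row.push(pixels[Math.floor(y / factor)][Math.floor(x / factor)]);
    }
    out.push(row);
  }
  return out;
}

// A 2x2 checkerboard scaled 2x becomes a blocky 4x4 checkerboard:
// every pixel is duplicated, and the "edges" are just bigger stairsteps.
var scaled = upscaleNearest([[0, 1], [1, 0]], 2);
```

Real image editors offer smarter interpolation (bilinear, bicubic), but even those can only smooth between existing pixels, which is why re-cutting from the original elements, or a purpose-built technique like the one below, is usually needed.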
When Microsoft purchased Nokia, it became an Android hardware manufacturer through the Nokia X line. But soon Microsoft will shift those devices to Windows Phone and the Lumia brand.
In the memos written by CEO Satya Nadella and mobile device chief Stephen Elop announcing layoffs at Microsoft, both executives also announced a subtle change in strategy: Microsoft is planning to change the operating system of its Android-powered Nokia X line to its own Windows Phone. Shocker.
According to Elop, Microsoft will continue “to sell and support existing Nokia X products” but plans to immediately shift “select future Nokia X designs and products to Windows Phone devices.” The Nokia X1, which was released earlier this year, was likely conceived and developed before Microsoft officially took over Nokia. The Nokia X2, its successor, was announced by Microsoft last month, and it will still run Nokia’s version of Android, but future low-cost devices most likely will not.
The fact that demand for Node.js jobs has grown so starkly since 2011 indicates the framework’s potential to remake the programming landscape. As need for their talent grows, Node.js developers could be some of the most highly sought after in the future.
An increasingly mobile Web has paralleled the rise of Node.js. Mobile devices make up at least 30% of total Web traffic, and Node.js is a framework with a lot to offer mobile app developers.
Mobile apps are designed to serve Web pages to mobile users. Most of the heavy lifting goes on in the back end of a mobile app, where websites are made available and managed. That means back-end frameworks, like lightweight Node.js, are enjoying a moment in the spotlight.
Node.js makes a great back-end framework for mobile development because its core purpose is to respond to network requests. The way this works on mobile is that iOS, Android and other mobile web clients connect to Node.js over HTTP to send requests through an API.
Larger technology companies have already noticed the appeal. In 2011, LinkedIn swapped Ruby on Rails for an overhauled mobile app running Node.js.
The spike in job listings is an indication that still more companies are hoping to adopt Node.js into their mobile plans. Developers, it’s time to list “Node.js” higher up on your resumes.
Few organizations have strong opinions and articulated policies on what a check-in should consist of. As long as the check-in is more or less usable in a code review, it's generally considered good enough. We can do better than this by making the contents of check-ins truly useful additions to the development process.
The central unit of work within any version-control system (VCS) is the check-in: committing one or more files into a repository. This records the current state of the files and preserves a record of their history. In this article, I examine the wide range of check-ins — from the minimal to the overly large — and identify the elements of a useful check-in and the costs of doing check-ins wrong.
A check-in is an atomic operation that makes previously isolated changes visible to other users. A check-in can affect several files to keep the project consistent, just as a transaction can update multiple records and tables at once. (To be accurate, some legacy VCSs such as CVS or Visual SourceSafe only allow check-ins of single files. However, all modern VCSs support atomic commits across several files.)
Ideally, every check-in should move the project from one consistent buildable and tested state to the next. The VCS ensures that the history of these changes is stored durably.
A check-in differs from a database transaction in that it preserves the history of changes, including the author, and usually some documentation (a comment or link to a change request).
Similarity to Storytelling
Together, all check-ins tell the story of how a project has developed. The best stories have a strong theme, a fascinating plot, a fitting structure, unforgettable characters, a well-chosen setting, and an appealing style.
A good check-in is like a sub-plot with its own theme; it makes it easy and interesting for others to read and understand the purpose and execution of each change.
Amazon Web Services today announced several new capabilities to make it easier for developers to build, deploy, and scale mobile applications. Amazon Cognito is a new service that provides simple user identity and data synchronization that lets developers create apps that authenticate users through popular public login providers, and then keep app data such as user preferences and game state synced between devices.
The new Amazon Mobile Analytics service allows developers to easily collect and analyze app usage data, up to billions of events per day from millions of users, and delivers usage reports within an hour of data being sent by the app. AWS is also introducing a new unified Mobile Software Development Kit (SDK) that makes it easy for iOS, Android, and Fire OS developers to access the new Amazon Cognito and Amazon Mobile Analytics services as well as popular AWS services like Amazon S3 and Amazon DynamoDB.
Today, many app developers around the world use the AWS Cloud as infrastructure building blocks for the back-end services that power their mobile applications. Still, these mobile app developers have had to spend valuable time on undifferentiated heavy lifting like connecting apps to storage and database services and integrating core functionality such as authentication, user management, notifications, and usage data analytics. With Amazon Cognito, Amazon Mobile Analytics, and the AWS Mobile SDK, developers are now able to focus more of their energy on what matters, the differentiated functionality of their app that attracts and retains end users.
With AWS Mobile Services, developers can:
Securely store, manage, and sync user identities and data (Amazon Cognito)
Quickly access and understand app usage data (Amazon Mobile Analytics)
Easily connect apps to AWS services (AWS Mobile SDK)
Send notifications, updates, and promotions across platforms (Amazon SNS)
When writing code, there are seven coding tasks you should probably not write yourself.
As programmers, we like to solve problems. We like to get ideas to spring from our heads, channel through our fingertips, and create magical solutions.
But sometimes we are too quick to jump in and start cranking out code to solve the problem without considering all the implications of the issues we’re trying to solve. We don’t consider that someone else might have already solved this problem, with code available for our use that has already been written, tested, and debugged. Sometimes we just need to stop and think before we start typing.
An alliance of big tech companies has formed to create standards for communications related to the Internet of things and all electronic devices.
The Open Interconnect Consortium wants to deliver an open-source specification for wirelessly connecting devices. The members include Atmel, Broadcom, Dell, Intel, Samsung, and Intel’s Wind River embedded-software division. The group seeks to accelerate the development of the Internet of things.
The Open Interconnect Consortium’s first open-source code will target smart homes and office solutions.
The Internet of things is expected to consist of 212 billion devices by the year 2020, including PCs, smartphones, tablets, wearables, and a variety of home and industrial appliances, according to market researcher International Data Corp. But to achieve that, chip makers and others in the electronics food chain must agree upon how to connect wireless devices together.
The companies want to create a communications framework based on industry standard technologies to wirelessly connect and manage the flow of data among devices. They want the communications to work regardless of form factor, operating system, or service provider.
Teaching coding languages and skills is a critical need in today’s technology-infused society, but we’re falling behind in the talent wars.
Even though programming jobs are some of the best paying in the world, the shortage of qualified developers and programmers is only projected to grow in the next several years. In fact, it’s estimated that there will be 1 million jobs left vacant by 2020 because of this alarming lack of qualified developers.
The lack of qualified talent in the computer science field has created fertile ground for the growing number of coding boot camps popping up across the nation. In fact, a recent study found that this year, the number of boot camp graduates is expected to triple from last year’s numbers, yielding nearly 6,000 graduates.
These intensive, multi-week full-time courses claim that they provide students with the necessary skills they need to join the world of developers.
While boot camps can assist with providing new skillsets and helping fill the talent gap, they are still somewhat limited in what they can offer. Here are a few reasons why e-learning is a better alternative:
You have seen examples of some really useful add-ons for Google Docs, but wouldn’t it be great if you could write your own add-on: one that adds new features to your Google Docs, one that makes you a rock star among the millions of Google Docs users?
Create a Google Add-on for Docs & Sheets
This step-by-step tutorial (download) will walk you through the process of creating your own add-on for Google Docs. The add-on used in the demo lets you insert an image of any address on Google Maps inside a Google Document without requiring any screen capture software.
Not all cloud infrastructure is the same, as David Mytton discovered when he started looking into Google Cloud. The service differs markedly from AWS and SoftLayer in these five key ways.
Amazon has set the standard for how we expect cloud infrastructure to behave, but Google doesn’t conform to these standards in some surprising ways. So, if you’re looking at Google Cloud, here are some things you need to be aware of.
1. Google Compute Engine Zones are probably in Ireland and Oklahoma
2. Google’s Compute Zones may be isolated, but they’re surprisingly close together
3. Scheduled maintenance takes zones offline for up to two weeks
4. You cannot guarantee where your data will be located
Mozilla, Cisco, Akamai, the Electronic Frontier Foundation, IdenTrust and researchers at the University of Michigan are working through the Internet Security Research Group to create a new certificate authority to offer digital certificates for free to anybody who owns a web domain. The “Let’s Encrypt” group will launch this service next summer.
Currently, the EFF writes today, “HTTPS (and other uses of TLS/SSL) is dependent on a horrifyingly complex and often structurally dysfunctional bureaucracy for authentication.”
The Let’s Encrypt project aims to make getting certificates not just free, but also as easy as possible. It will take two simple shell commands to enable HTTPS for any given site that wants to use it. All of the certificates that are issued or revoked will be public, and the team aims to make its protocols an open standard that other certificate authorities can adopt.
Developers who want to test the service can head over to https://github.com/letsencrypt/lets-encrypt-preview to take a look at the code, but it is definitely not meant for production servers yet. If you decide to ignore that warning, chances are your users will see so many warnings about your certificate that they will never reach your site.
Picture yourself at a work event. What are you wearing? What are you talking about? How loud are you talking? If you indulge at all, how much have you had to drink? Now picture yourself on a weekend trip with a group of friends.
We won’t go into details, but things look a little different, don’t they? We all change behaviors based on our environment. Physical location and surroundings have a lot to do with our mindset, and can influence how we do just about everything.
Behavior on a mobile phone vs. a desktop computer is no exception. Your physical location, state of mind and desired outcomes can be profoundly different depending on which device you are using, yet recent efforts to adapt desktop sites to mobile often ignore these differences and simply scale the online experience to a smaller screen. The result is a degraded end-user experience that may not meet the needs of a mobile environment, as well as disappointing outcomes for marketers and consumers.
A Brief Explanation: Responsive vs. Mobile Web
At the most basic level, it’s the difference between having one website or two. Responsive design allows the layout, scale and orientation of the desktop site to be adapted to a mobile viewing experience. The content served up to the user is the same as on the desktop site, and while the layout is reorganized to accommodate a smaller screen, it is important to remember that the integrity of the desktop site is intended to remain as true to form as possible, and any changes to the desktop site will also affect the mobile site. Responsive design is concerned only with size and scale, not with the end user’s device type or presumed environment.
A mobile website is a separate and distinct site from the desktop site, and must be maintained as such. It is designed to cater to the mobile experience, and makes the assumption that the end user has different objectives than they would on a desktop site. This means the mobile site may not offer the full scale of content served up on the desktop version, and the options presented on the landing page may be refined accordingly.
Which is better? Well, it depends
Going back to the work party vs. weekend with friends example, it’s clear we adapt our actions according to our environment. However, the case can be made that there are some things we do no matter where we are. The examples below show that the suitability of a responsive or mobile site depends entirely on whether the people using your site change behavior based on their environment – or not.
In other articles we looked at how to build a cross-browser video player using the HTMLMediaElement and Fullscreen APIs, and also at how to style the player. This article will take the same player and show how to add captions and subtitles to it, using the Web Video Text Tracks (WebVTT) format and the track element.
HTML5 and Video Captions
Before diving into how to add captions to the video player, there are a number of things you should be aware of, which we will cover first.
Captions versus subtitles
Captions and subtitles are not the same thing: they have significantly different audiences, and convey different information, and it is recommended that you read up on the differences if you are not sure what they are. They are however implemented in the same way technically, so the material in this article will apply to both.
For this article we will refer to the displayed text tracks as captions, even though their content is aimed at hearing people who have difficulty understanding the language of the film (strictly speaking, subtitles) rather than at deaf or hard-of-hearing people.
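Whichever term you use, the track data itself is a plain-text WebVTT file: a "WEBVTT" header followed by cues, each with a "start --> end" timing line and the caption text. The sketch below parses a single cue to show the format; it is a deliberately minimal illustration, since browsers parse full WebVTT files natively when you point a track element (e.g. kind="captions", src="captions.vtt") at one.

```javascript
// Parse a single WebVTT cue of the form:
//   00:00:01.000 --> 00:00:04.000
//   Caption text (one or more lines)
// Real WebVTT files also have a "WEBVTT" header, optional cue
// identifiers, and settings; this toy parser ignores all of that.
function parseCue(cueText) {
  var lines = cueText.trim().split('\n');
  var timing = lines[0].split(' --> ');
  return {
    start: timing[0],          // cue start time
    end: timing[1],            // cue end time
    text: lines.slice(1).join('\n')  // the displayed caption text
  };
}

var cue = parseCue('00:00:01.000 --> 00:00:04.000\nHello, world!');
// cue.start is '00:00:01.000' and cue.text is 'Hello, world!'
```

In practice you rarely parse WebVTT yourself; the browser exposes parsed cues through the track element's TextTrack API, but seeing the format makes the rest of the article easier to follow.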
A startup called PredictionIO has raised $2.5 million in seed capital to help it try to make a business out of open source machine learning software. Unlike previous open source projects, though, PredictionIO is designed to be easy to get started with and use, even by developers who aren’t data scientists.
PredictionIO claims developers can be writing predictive models for their applications in minutes, primarily, it seems, around things such as recommendation and personalization. The software is available as a download or as a cloud instance on Amazon Web Services. The company itself is part of three startup accelerators: Mozilla WebFWD, 500 Startups and StartX.
Machine learning is a potentially lucrative software market, and PredictionIO is tackling it by trying to split the difference between open-source and proprietary tools. Open source software is popular — in machine learning that includes projects such as Mahout, scikit-learn and, at some point, Oryx — but often hard to deploy and use. Commercial software is getting much better — with the release of products like GraphLab Create and Microsoft’s new Azure machine learning service — but can be too much like a black box, PredictionIO contends.
After a year-long “pilot program,” server software maker Nginx is officially launching several paid services on Amazon Web Services (AWS) today. While Nginx Plus has been available (if not promoted) for the whole pilot program, there are two new additions to Nginx’s cloud product lineup: support for its streaming media server and annual subscriptions for all its services.
Now sites and applications using Amazon’s Elastic Compute Cloud (EC2) can use Nginx’s streaming media server as a video streaming solution. A module extension to Nginx’s web server product, Nginx streaming supports “all common video formats” — from MP4 and FLV to Apple HLS and Adobe HDS — and can adjust the quality of a video on the fly based on the speed of a connection.
Nginx Plus, Nginx’s other AWS product, is the commercial version of the popular open-source, Linux-based Nginx web server technology.
Essentially, it’s web server and networking software that you can embed inside an app. The company touts features like application load balancing, advanced cache control, and monitoring tools as reasons why AWS customers should upgrade from the free version.
“Building a great application is half the battle, delivering it is the other half … so our focus is all about application delivery,” Nginx CEO Gus Robertson said.
Nginx has brought annual subscription prices to its AWS customers, offering an Nginx Plus instance for $1,500 annually rather than a $0.04 per hour base price (based on EC2 instance type), and streaming media server support for $700 annually versus a $0.10 per hour flat rate.
Paying an annual subscription up front can be cheaper than paying for each service hourly, while customers who want to hedge their bets can use a combination of both annual and hourly services.
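Worked out for the streaming media server using the figures quoted above, the break-even math looks like this (assuming an instance that runs around the clock all year):

```javascript
// Annual subscription vs. hourly billing for the streaming media
// server, using the prices quoted in the article: $700 per year
// versus $0.10 per hour.
var hourlyRate = 0.10;
var hoursPerYear = 24 * 365;                      // 8,760 hours
var hourlyAnnualCost = hourlyRate * hoursPerYear; // about $876
var subscriptionCost = 700;

// For an always-on instance, the subscription comes out ahead.
var savings = hourlyAnnualCost - subscriptionCost; // about $176
```

For instances that only run part of the year, the hourly rate can win instead, which is why mixing annual and hourly billing across a fleet makes sense.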
At the time of writing (July 2014), Python is the most popular language for teaching introductory computer science courses at top-ranked U.S. departments.
Specifically, eight of the top 10 CS departments (80%), and 27 of the top 39 (69%), teach Python in introductory CS0 or CS1 courses.
Python is the most popular language in this list. It narrowly surpassed Java, which has been the dominant introductory teaching language over the past decade. Some schools have fully switched over to Python, while others take a hybrid approach, offering Python in CS0 and keeping Java in CS1. However, at the high school level, Java is still used in the AP (Advanced Placement) curriculum.
The next most popular language is MATLAB, which is often used in CS0 courses to introduce scientists and engineers to programming. C++ is next on the list, but it's been firmly supplanted by Java over the past decade. The high school AP curriculum even replaced C++ with Java in 2003. C is just as popular as C++ in this list, but some introductory courses that use C (such as Harvard's CS50) teach it alongside other languages rather than having it be the sole language.
Scheme-based languages are popular amongst a devoted subset of educators and programming language researchers. Most notably, two (somewhat rival) philosophical camps -- SICP and HtDP -- have created acclaimed textbooks and courses around the Scheme ecosystem. But in recent years, Scheme has been phased out in favor of Python at places such as MIT and UC Berkeley. It's being used in only 4 schools in this list.
Scratch is the only visual, blocks-based language that made this list. It's one of the most popular languages of this genre, which includes related projects such as Alice, App Inventor, Etoys, Kodu, StarLogo, and TouchDevelop. The creators of these sorts of languages focus mostly on K-12 education, which might explain why they haven't gotten as much adoption at the university level.
Finally, note that two interesting sets of languages didn't make it on this chart because they were used in either zero or one university in our sample:
Statically-typed functional languages such as Haskell and OCaml, which are popular amongst PL researchers
Widely-used industry languages that are commonly associated with specific proprietary platforms, such as Objective-C (Apple) and C#/Visual Basic (Microsoft)
If we revisit this analysis in five, ten, or twenty years, which language will be in the lead then?
What makes a good programmer? It’s an interesting question to ask yourself. It makes you reflect on the craft of software development. It is also a good question to ask your colleagues. It can trigger some interesting discussions on how you work together. Here are five skills I think are crucial to have in order to be a good programmer.
1. PROBLEM DECOMPOSITION
Programming is about solving problems. But before you write any code, you need to be clear on how to solve the problem. One skill good programmers have is the ability to break the problem down into smaller and smaller parts, until each part can be easily solved. But it is not enough simply to find a way to solve the problem. A good programmer finds a way to model the problem in such a way that the resulting program is easy to reason about, easy to implement and easy to test.
Some of the most complicated programs I have worked on were complicated in part because the implementation did not fit the problem very well. This led to code that was hard to understand. When the problem is well modeled, I agree with Bernie Cosell (interviewed in the excellent Coders at Work):
“…there are very few inherently hard programs. If you are looking at a piece of code and it looks very hard – if you can’t understand what this thing is supposed to be doing – that’s almost always an indication that it was poorly thought through. At that point you don’t roll up your sleeves and try to fix the code; you take a step back and think it through again. When you’ve thought it through enough, you’ll find out that it’s easy“.
2. SCENARIO ANALYSIS
Good developers have the ability to consider many different scenarios for the program. This applies both to the logic in the program, and to the internal and external events that can occur. To consider the different paths in the logic, they ask questions like:
What happens if this argument is null? What if none of these conditions are true? Is this method thread-safe? To discover what types of events the software needs to handle, they will ask questions like: What if this queue becomes full? What if there is no response to this request? What if the other server restarts while this server is restarting?
The good programmers ask themselves: How can this break? In other words, they have the ability to think like testers. In contrast, inexperienced programmers mostly only consider the “happy path” – the normal flow of control when everything goes as expected (which it does most of the time). But of course, the unexpected inevitably happens, and the program needs to be able to cope with that.
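The "how can this break?" questions above can be applied to even a tiny function. The sketch below is an invented example, not from any real codebase: each non-happy-path scenario is considered and given explicit, defined behavior instead of being left to chance.

```javascript
// Scenario analysis on a small function: every "what if?" from above
// gets an explicit answer rather than an accidental one.
function averageScore(scores) {
  // What if the argument is null, or not an array at all?
  if (!Array.isArray(scores)) {
    throw new TypeError('scores must be an array');
  }
  // What if the array is empty? Define the result; avoid dividing by zero.
  if (scores.length === 0) {
    return 0;
  }
  var sum = 0;
  for (var i = 0; i < scores.length; i++) {
    // What if an entry is not a number? Fail loudly and early.
    if (typeof scores[i] !== 'number' || isNaN(scores[i])) {
      throw new TypeError('scores must contain only numbers');
    }
    sum += scores[i];
  }
  return sum / scores.length;
}
```

The happy path here is two lines; everything else is the tester's mindset written down as code. That ratio is typical, which is why programmers who only imagine the happy path ship fragile software.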
HTML5 is taking web documents to the next level by adding semantics. HTML5 contains several semantic elements, but they are not enough to annotate all of your content. You can tag your content with Microdata to build a better web document that can be understood by machines.
Need for Semantics
• Machines cannot understand the content and its context.
• Making sense of web content is too hard for machines.
• To understand everything, they would have to understand natural language, in every language.
• Semantics help by annotating web content, giving it meaning that machines can process.
Need for Microdata
• HTML5 is not only about new presentational elements. It adds several semantic tags.
• Everyone has their own needs for new semantic elements, and it is not practical to add an element to the specification for each and every semantic meaning.
• New data-markup formats like Microdata, Microformats and RDFa were created to accommodate such needs.
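To see how Microdata makes content machine-readable, here is a sketch: a snippet marked up with itemscope/itemtype/itemprop attributes (using the schema.org Person vocabulary) and a deliberately naive extractor. A real consumer would use a proper HTML parser or the browser's Microdata DOM API; the regex below is only enough for this flat, well-formed example.

```javascript
// A snippet of HTML5 Microdata: the attributes tell machines that this
// block describes a Person, and which text is the name vs. job title.
var html =
  '<div itemscope itemtype="http://schema.org/Person">' +
  '  <span itemprop="name">Ada Lovelace</span>' +
  '  <span itemprop="jobTitle">Mathematician</span>' +
  '</div>';

// Naive extraction of itemprop name/value pairs from the markup above.
// Good enough only for this simple, flat example.
function extractItemprops(markup) {
  var props = {};
  var re = /itemprop="([^"]+)">([^<]*)</g;
  var m;
  while ((m = re.exec(markup)) !== null) {
    props[m[1]] = m[2];
  }
  return props;
}

var person = extractItemprops(html);
// person.name is 'Ada Lovelace': a machine can now tell that this text
// is a person's name, not just arbitrary content on the page.
```

Without the itemprop annotations, "Ada Lovelace" is just a string on a page; with them, search engines and other machines can recognize it as a person's name.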
If Android apps were good enough for BlackBerry, might they appeal to Microsoft too? Yes, says one tipster, claiming that all Lumia phones will be able to run Android apps in the future. It’s certainly possible but is it worth it?
As Microsoft continues to evolve Windows Phone in hopes of greater market share, its latest trick could be to support Android apps on its handsets. That’s the latest rumor, according to Eldar Murtazin, a long-time industry insider who has a few correct predictions to his name. Murtazin tweeted the information on Tuesday morning, sounding pretty certain:
The cloud market is nothing if not fluid. Businesses must weigh concerns sparked by Edward Snowden’s disclosures of government scooping up customer data from cloud providers against the agility and flexibility that cloud computing offers. There is appeal in renting, not buying, compute and storage, especially for spiky workloads but … you have to worry about all that data.
Given all that, the move to cloud still seems inexorable.
Last week, the federal Centers for Medicare and Medicaid Services (CMS) told health insurers that they can use Amazon Web Services to store data they need to share with it, according to CNBC.com. AWS storage costs would be between $6,000 and $24,000 annually per insurer.
According to CNBC, Aaron Albright, CMS spokesman said:
“Based on feedback from stakeholders, CMS is offering issuers a new option for data reporting under the risk adjustment and reinsurance program … Issuers may select the option that best works for them for reporting data that is expected to begin later this year.”
This was probably welcome news to some, but not all, of the insurers since some had already bought the hardware they were told they’d need to handle their workloads.
Storing documents on file sharing services like Dropbox and Google Drive has become a common practice online in the last five years. In that time, as people create, edit and hoard older data files, they find they are running short of the free space included with an account.
With more and more people opting for either a tablet-only existence or switching from a traditional desktop computer with multiple internal drives to a laptop with a much smaller SSD drive, finding an alternate storage system is important.
At a cost of between $0.05 and $0.10 per gigabyte per month for additional online storage, you can spend anywhere from $600 to $1,200 per year for just 1 terabyte. As you will see, a more economical solution is to own your own personal cloud hosted on your home network.
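The arithmetic behind that range is simple, worked out below from the quoted per-gigabyte monthly rates (treating 1 TB as 1,000 GB for round numbers):

```javascript
// Annual cost of 1 TB of extra cloud storage at the quoted rates of
// $0.05 to $0.10 per gigabyte per month.
var gigabytes = 1000;                       // roughly 1 TB
var lowAnnual = 0.05 * gigabytes * 12;      // $600 per year
var highAnnual = 0.10 * gigabytes * 12;     // $1,200 per year
```

A one-time purchase of a network-attached drive for the home network, by contrast, is paid once and then costs only power, which is the economics the rest of the article builds on.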
It wasn’t too long ago that Hadoop was a shiny new technology — familiar to large web companies but foreign (and fascinating) to everyone else. Things changed fast and Hadoop is now a billion-dollar IT market underpinning big data efforts by companies of all stripes. Mike Olson, co-founder and chief strategy officer (and former CEO) of Cloudera, came on the Structure Show podcast this week to tell us where Hadoop is now and where it’s headed.
“If we had to identify the single defining characteristic of the [Hadoop] market this year and going forward, it’s that shift in the competitive dynamic,” Olson explained. “It’s no longer a band of hearty, wild-eyed visionaries, venture-backed companies battling for market share with one another, but really the entrance of large and well-capitalized companies with very large installed bases and very good field relations with those guys who are going to shape how we — Cloudera — does business and really are going to shape how the market develops over the coming seven years.”
Sharing your scoops to your social media accounts is a must to distribute your curated content. Not only will it drive traffic and leads through your content, but it will also help show your expertise to your followers.
How do I integrate my topics’ content into my website?
Integrating your curated content into your website or blog will allow you to increase your website visitors’ engagement, boost SEO and acquire new visitors. By redirecting your social media traffic to your website, Scoop.it will also help you generate more qualified traffic and leads from your curation work.
Distributing your curated content through a newsletter is a great way to nurture and engage your email subscribers while developing your traffic and visibility.
Creating engaging newsletters with your curated content is really easy.