pixels and pictures
Exploring the digital imaging chain from sensors to brains

Moon over the Crest


Canon EOS 60D + 70-200mm f/4L - Serre Chevalier Valley, French Alps

Philippe J DEWOST's insight:

Proud of this one


Nvidia’s AI creates amazing slow motion video by ‘hallucinating’ missing frames


Nvidia’s researchers developed an AI that converts standard videos into incredibly smooth slow motion.

The broad strokes: Capturing high quality slow motion footage requires specialty equipment, plenty of storage, and setting your equipment to shoot in the proper mode ahead of time.

Slow motion video is typically shot at around 240 frames per second (fps) — that’s the number of individual images that make up one second of video. The more frames you capture per second, the smoother the resulting slow motion.

 

The impact: Anyone who has ever wished they could convert part of a regular video into a fluid slow motion clip can appreciate this.

If you’ve captured your footage in, for example, standard smartphone video format (30fps), trying to slow down the video will result in something choppy and hard to watch.

Nvidia’s AI can estimate what additional frames would look like and generate new ones to fill the gaps. It can take any two sequential frames and hallucinate an arbitrary number of new frames to connect them, ensuring that any motion between them is preserved.
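For intuition, here is a minimal sketch of what "filling the gaps" between two frames means, using plain linear cross-fading in NumPy. This is not Nvidia's method (their network estimates motion and warps pixels along it); a naive blend like this ghosts on anything that moves, which is exactly the problem the neural approach solves. The frame sizes and pixel values are synthetic placeholders.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_new):
    """Insert n_new intermediate frames between two consecutive frames
    by simple linear cross-fading (a naive stand-in for the learned,
    flow-based interpolation Nvidia's network performs)."""
    steps = np.linspace(0.0, 1.0, n_new + 2)[1:-1]  # exclude the two endpoints
    return [((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype) for t in steps]

# Two synthetic 4x4 grayscale frames standing in for real video frames.
frame_a = np.zeros((4, 4), dtype=np.float32)
frame_b = np.ones((4, 4), dtype=np.float32) * 255

# Turn a 30 fps gap into a 240 fps gap: 7 new frames between each original pair.
new_frames = interpolate_frames(frame_a, frame_b, n_new=7)
print(len(new_frames), new_frames[0][0, 0], new_frames[-1][0, 0])
```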

Philippe J DEWOST's insight:

AI is slowing down, and it is not what you think.


AI can transfer human facial movements from one video to another, and the results look very realistic


Researchers have taken another step towards realistic, synthesized video. The team, made up of scientists in Germany, France, the UK and the US, used AI to transfer the head poses, facial expressions, eye motions and blinks of a person in one video onto another entirely different person in a separate video. The researchers say it's the first time a method has transferred these types of movements between videos and the result is a series of clips that look incredibly realistic.

The neural network created by the researchers only needs a few minutes of the target video for training and it can then translate the head, facial and eye movements of the source to the target. It can even manipulate some background shadows when they're present. In the video below, you can see how the system does this using videos of Barack Obama, Vladimir Putin and Theresa May, among others, as examples -- mouth movements, head rotations and eyebrow wiggles are all transferred between videos. The researchers even use different portions of the same video as both the source and the target and the synthesized result is nearly indistinguishable from the original video.

Philippe J DEWOST's insight:

You need to check this SIGGRAPH video if you want to grasp the potential and the wide range of applications (including questionable ones).


Verizon announces RED HYDROGEN ONE, the world’s first holographic smartphone


RED HYDROGEN ONE is a revolutionary new smartphone that will change the way you capture video and immerse yourself in mobile entertainment.

 

HYDROGEN ONE is the world’s first holographic phone with 4-View technology; it’s what 3D always wished it was, and without the glasses. Pair HYDROGEN ONE with Verizon’s best streaming network (according to Nielsen) and Verizon’s unlimited plan and take full advantage of RED’s groundbreaking holographic media display and multidimensional surround sound for the ultimate streaming experience. The 5.7-inch holographic media machine features an industrial yet polished design with a powerful pogo pin system that allows you to add stackable modules to your phone for added functionality. It’s a revolutionary phone designed for digital creativity, from cinema to music to photography to art.

 

“RED HYDROGEN ONE was designed with cutting-edge technology that simply can't be described - you have to hold it in your hands and experience it yourself to understand why this is such a mobile game changer,” said Brian Higgins, vice president, device and consumer product marketing, Verizon. “A phone like this deserves the best network in the country, which is why we can’t wait to bring it to Verizon customers later this year.”

Philippe J DEWOST's insight:

Does anybody understand what this announcement is really about?! Because "cutting-edge technology that simply can't be described" does not really mean anything beyond clickbait marketing...


Mind-bending new screen technology uses ‘magic pixel’ to display different content to multiple people


Walking through the airport, you look up at the big screen to find your gate. But this is no ordinary public display. Rather than a list of arrivals and departures, you see just your flight information, large enough to view from a distance. Other travelers who look at the same screen, at the same time, see their flight information instead.

 

On the road, traffic signals are targeted individually to your car and other vehicles as they move — showing you a red light when you won’t make the intersection in time, and displaying a green light to another driver who can make it through safely.

At a stadium, the scoreboard displays stats for your favorite players. Fans nearby, each looking simultaneously at the same screen, instead see their favorites and other content customized to them.

 

These are examples of the long-term potential for “parallel reality” display technology to personalize the world, as envisioned by Misapplied Sciences Inc., a Redmond, Wash.-based startup founded by a small team of Microsoft and Walt Disney Imagineering veterans.

 

It might sound implausible, but the company has already developed the underlying technology that will make these scenarios possible. It’s a new type of display, enabled by a “multi-view” pixel. Unlike a traditional pixel, which emits one color of light in all directions, Misapplied Sciences says its pixel can send different colors of light in tens of thousands or even millions of directions.
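As a toy illustration of the idea, the sketch below models a single multi-view pixel as a lookup table from viewing direction to color, so two viewers at different angles read different content off the same pixel. The direction-bin count and the two-color "content" are invented for the example and have nothing to do with Misapplied Sciences' actual hardware.

```python
import numpy as np

# A toy "multi-view pixel": instead of one color, it stores a color per
# viewing direction (here a coarse 1-degree grid over a 60-degree fan).
N_DIRECTIONS = 60
pixel_colors = np.zeros((N_DIRECTIONS, 3), dtype=np.uint8)

# Show red content to viewers on the left half of the fan, blue on the right.
pixel_colors[:N_DIRECTIONS // 2] = (255, 0, 0)
pixel_colors[N_DIRECTIONS // 2:] = (0, 0, 255)

def color_seen_from(angle_deg: float) -> tuple:
    """Return the color this pixel emits toward a viewer at angle_deg (0-59)."""
    direction_bin = int(np.clip(angle_deg, 0, N_DIRECTIONS - 1))
    return tuple(int(c) for c in pixel_colors[direction_bin])

print(color_seen_from(10.0))   # a viewer on the left sees red
print(color_seen_from(50.0))   # a viewer on the right sees blue
```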

Philippe J DEWOST's insight:

This is a true "Magic Leap"!

I have not "seen" such impressive technology in the imaging field in the past 10 years, and I strongly recommend that you watch the videos carefully to start getting a clue of what it would actually mean to see many different targeted "realities" through the same, unique window...

(Plus, I love the company name "Misapplied Sciences" as well as their logo — it reminds me of our eye-fidelity™ brand at imsense.)


Peter Thiel Employee Helped Cambridge Analytica Before It Harvested Data - The New York Times


As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.

 

It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

 

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

 

 


The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

Philippe J DEWOST's insight:

It is really starting to suck. Big Time. Palantir state customers should start to think again IMHO.


Google may be buying Lytro's assets for about $40M


Multiple sources tell us that Google is acquiring Lytro, the imaging startup that began as a ground-breaking camera company for consumers before pivoting to use its depth-data, light-field technology in VR.

Emails to several investors in Lytro have received either no response, or no comment. Multiple emails to Google and Lytro also have had no response.

But we have heard from several others connected either to the deal or the companies.

One source described the deal as an “asset sale” with Lytro going for no more than $40 million. Another source said the price was even lower: $25 million and that it was shopped around — to Facebook, according to one source; and possibly to Apple, according to another. A separate person told us that not all employees are coming over with the company’s technology: some have already received severance and parted ways with the company, and others have simply left.

Assets would presumably also include Lytro’s 59 patents related to light-field and other digital imaging technology.

The sale would be far from a big win for Lytro and its backers. The startup has raised just over $200 million in funding and was valued at around $360 million after its last round in 2017, according to data from PitchBook. Its long list of investors includes Andreessen Horowitz, Foxconn, GSV, Greylock, NEA, Qualcomm Ventures and many more. Rick Osterloh, SVP of hardware at Google, sits on Lytro’s board.

A price tag of $40 million is not quite the exit that was envisioned for the company when it first launched its camera concept and, in the words of investor Ben Horowitz, “blew my brains to bits.”

Philippe J DEWOST's insight:

Approximately $680k per patent: this is the end of 12-year-old Lytro's story. After $200M in funding and several pivots, its assets and IP are rumored to be joining Google.
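A quick back-of-the-envelope check of that per-patent figure, taking the $40 million ceiling and the 59 patents mentioned in the excerpt above:

\[ \$40{,}000{,}000 \div 59 \approx \$678{,}000 \ \text{per patent} \]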

Remember some key steps, from the Light Field Camera (http://sco.lt/8Ga7fN) to the DSLR (http://sco.lt/9GGCEz) to 360° video tools (http://sco.lt/5tecr3).


Here is where everybody can test neural network image 4x super-resolution and enhancement


Free online image upscaling and JPEG artifact removal using neural networks. Image super-resolution and image enhancement online.
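For comparison, here is what a purely classical 4x upscale looks like in code: a minimal Pillow sketch using bicubic resampling. Neural super-resolution services such as the one linked here aim to beat exactly this baseline by synthesizing plausible detail rather than just smoothing between existing pixels. The "input.jpg" filename is a placeholder, not something from the article.

```python
from PIL import Image

# Classical 4x upscaling baseline (bicubic resampling). Neural
# super-resolution tries to do better by hallucinating plausible detail
# instead of merely interpolating between existing pixels.
img = Image.open("input.jpg")
upscaled = img.resize((img.width * 4, img.height * 4), resample=Image.BICUBIC)
upscaled.save("input_4x_bicubic.png")
print(img.size, "->", upscaled.size)
```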

Philippe J DEWOST's insight:

Impressive, as far as I could judge from a few samples. Tell me how it works for you.


Magic Leap raises big new funding round: still no product


Augmented reality startup Magic Leap is raising upwards of $1 billion in new venture capital funding, according to a Delaware regulatory filing unearthed by CB Insights. It would be Series D stock sold at $27 per share, which is a 17.2% bump from Series C shares issued in the summer of 2016.
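For reference, the implied Series C price behind the figures quoted above (Series D at $27 per share, a 17.2% bump) works out to roughly:

\[ \$27 \div 1.172 \approx \$23 \ \text{per Series C share} \]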

 

Bottom line: Magic Leap still hasn't come out with a commercial product, having repeatedly missed expected release dates. But investors must still like what they see down in Ft. Lauderdale, given that they keep plugging in more money at increased valuations.

 

Digging in: Multiple sources tell Axios that the deal is closed, although we do not know exactly how much was raised. The Delaware filing is only a stock authorization, which means the Florida-based company may actually raise less. Bloomberg had reported last month that Magic Leap was raising $500 million at around a $5.5 billion pre-money valuation, with new investors expected to include Singapore's Temasek Holdings. One source suggests the final numbers should be close to what Bloomberg reported.

Philippe J DEWOST's insight:

This amazing company is not located in Silicon Valley and has not released any product in its first 7 years of existence, yet it just raised north of $1Bn in a Series D round.

Whatever the outcome, it now promises to be impressive...


The history of Paris through its maps

This page gathers more than 60 maps of Paris from every era, in very high resolution, retracing the entire history of the city from its founding to the present day through all its transformations. To keep the page light, you sometimes have to click on a map to get the image at its best quality.

The "Monkey Selfie" Case Has Been Settled — This Is How It Broke Ground for Animal Rights

The "Monkey Selfie" Case Has Been Settled — This Is How It Broke Ground for Animal Rights | pixels and pictures | Scoop.it

After roughly two years of court battles, the groundbreaking lawsuit asking a U.S. federal court to declare Naruto—a free-living crested macaque—the copyright owner of the internationally famous “monkey selfie” photographs has been settled.

 

PETA; photographer David Slater; his company, Wildlife Personalities, Ltd.; and self-publishing platform Blurb, Inc., have reached a settlement of the “monkey selfie” litigation. As a part of the arrangement, Slater has agreed to donate 25 percent of any future revenue derived from using or selling the monkey selfies to charities that protect the habitat of Naruto and other crested macaques in Indonesia.

Philippe J DEWOST's insight:

The picture was taken in 2011 by "Naruto," a wild macaque that pressed the shutter on Slater’s camera. Did the primate own the rights to the picture? In a tentative opinion issued last year, a district judge said there was "no indication" that the U.S. Copyright Act extended to animals. Slater, meanwhile, argued that the UK copyright license he obtained for the picture should be valid worldwide.

Sadly, we will never know what "Naruto" thinks of human copyright laws...


Photographer Shoots Formula 1 With 104-Year-Old Camera, And Here’s The Result!

If ever there was a sport that required rapid-fire photography, Formula One racing is it. Which makes what photographer Joshua Paul does even more fascinating, because instead of using top-of-the-range cameras to capture the fast-paced sport, Paul chooses to take his shots using a 104-year-old Graflex 4×5 view camera. The photographer clearly has an incredible eye for detail, because unlike modern cameras that can take as many as 20 frames per second, his 1913 Graflex can only take 20 pictures in total. Because of this, every shot he takes has to be carefully thought about first, and this is clearly evident in this beautiful series of photographs.
Philippe J DEWOST's insight:
Some centennial imaging hardware is still in the game when put in proper hands...

This photo of some strawberries with no red pixels is the new 'the dress'


Remember the internet kerfuffle that was 'the dress'? Well, there's another optical illusion that's puzzling the internet right now. Behold: the red strawberries that aren't really red. Or more specifically, the image of the strawberries contains no 'red pixels.'

The important distinction to make here is that there is red information in the image (and, crucially, the relationships between colors are preserved). But despite what your eyes might be telling you, there are no pixels that appear at either end of the 'H' (hue) axis of the HSV color model. That is, there is no pixel that, in isolation, would be considered red; hence, no 'red pixels' in the image.
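If you want to verify the claim on an image yourself, here is a small Pillow sketch that counts pixels whose hue lands near the red end of the HSV hue circle with meaningful saturation. The hue and saturation thresholds and the "strawberries.jpg" filename are assumptions chosen for illustration, not values from the article.

```python
from PIL import Image

# Convert to HSV and count pixels that would read as "red" in isolation:
# hue near the red end of the (circular) hue axis, with enough saturation.
img = Image.open("strawberries.jpg").convert("HSV")
h_band, s_band, _ = img.split()

red_pixels = 0
for hue, sat in zip(h_band.getdata(), s_band.getdata()):
    # Pillow stores hue in 0-255 (0 and 255 both map to red), saturation in 0-255.
    if (hue <= 10 or hue >= 245) and sat >= 80:
        red_pixels += 1

total = img.width * img.height
print(f"{red_pixels} of {total} pixels ({100 * red_pixels / total:.2f}%) look red in isolation")
```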

 

So it's not that your brain is being tricked into inventing the red information, it's that your brain knows how much emphasis to give this red information, so that colors that it would see as cyan or grey in other contexts are interpreted as red here.

 

As was the case with 'the dress,' it all relates to a concept called color constancy, which relates to the human brain's ability to perceive objects as the same color under different lighting (though in this case there are unambiguous visual cues to what the 'correct' answer is).

 

Philippe J DEWOST's insight:

No "red pixel" has ever been added to this image and they are not "invented" by your brain either. Funny enough, our human white balancing capabilities go way beyond cameras "auto white balance" mode...


NASA gives Jupiter the Van Gogh treatment with magnificent new image


Although originally slated to crash into Jupiter this month, Juno, NASA's Jovian explorer, has been given a three-year extension to gather all of NASA's planned scientific measurements, the agency announced earlier this month.

If it keeps producing images like this, showcasing Jupiter's writhing, stormy face, I really hope they never crash the Absolute Unit.

The picture was snapped on May 23 as Juno swung past the planet for a 13th time, only 9,600 miles from its "surface", the tangle of tumultuous clouds that mark its exterior. The bright white hues represent clouds that are likely made of a mix of ammonia and water, while the darker blue-green spirals represent cloud material "deeper in Jupiter's atmosphere."

The image was color-enhanced by two citizen scientists, Gerald Eichstädt and Seán Doran, to produce the image above. The rippling mess of storms marks Jupiter's face like a stunning oil painting, a Jovian Starry Night with billowing whites curling in on each other, like the folds of a human brain.

NASA draws attention to the "bright oval" in the bottom portion of the image, explaining how JunoCam -- the imager on the spacecraft -- reveals "fine-scale structure within this weather system, including additional structures within it."

It's not the first time that Jupiter's menace has been caught and colorized either, but this Earth-like image, snapped back in March, shows a side of the gas giant that isn't all about swirling clouds and red spots.

All of Juno's images taken with the JunoCam imager are available to marvel at and process at the Juno Mission homepage.

Philippe J DEWOST's insight:

Jupiter’s vociferous weather systems snapped on a Juno fly-by reveal a stunning cosmic oil painting.


DeepMind’s AI can ‘imagine’ a world based on a single picture


Artificial intelligence can now put itself in someone else’s shoes. DeepMind has developed a neural network that taught itself to ‘imagine’ a scene from different viewpoints, based on just a single image.

Given a 2D picture of a scene – say, a room with a brick wall, and a brightly coloured sphere and cube on the floor – the neural network can generate a 3D view from a different vantage point, rendering the opposite sides of the objects and altering where shadows fall to maintain the same light source.

The system, called the Generative Query Network (GQN), can tease out details from the static images to guess at spatial relationships, including the camera’s position.

“Imagine you’re looking at Mt. Everest, and you move a metre – the mountain doesn’t change size, which tells you something about its distance from you,” says Ali Eslami, who led the project at DeepMind.

“But if you look at a mug, it would change position. That’s similar to how this works.”

To train the neural network, he and his team showed it images of a scene from different viewpoints, which it used to predict what something would look like from behind or off to the side. The system also taught itself through context about textures, colours, and lighting. This is in contrast to the current technique of supervised learning, in which the details of a scene are manually labeled and fed to the AI.
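To make the setup concrete, here is a heavily simplified sketch of the GQN idea in PyTorch: encode each (image, viewpoint) observation, sum the encodings into one scene representation, then ask a generator to render the scene from a new query viewpoint. The tiny MLPs, flattened 16x16 images and 7-number poses are assumptions for illustration; DeepMind's actual architecture uses convolutional and recurrent components.

```python
import torch
import torch.nn as nn

IMG, POSE, REPR = 16 * 16 * 3, 7, 64  # flattened image, pose and representation sizes

# Representation network: encodes each (image, viewpoint) observation.
encoder = nn.Sequential(nn.Linear(IMG + POSE, 128), nn.ReLU(), nn.Linear(128, REPR))
# Generator: predicts the view from a query viewpoint, given the scene summary.
generator = nn.Sequential(nn.Linear(REPR + POSE, 128), nn.ReLU(), nn.Linear(128, IMG))

def predict_view(observed_images, observed_poses, query_pose):
    # Encode every observation and sum into one scene representation,
    # so the number of observations can vary.
    per_obs = encoder(torch.cat([observed_images, observed_poses], dim=-1))
    scene = per_obs.sum(dim=0)
    return generator(torch.cat([scene, query_pose], dim=-1))

# Three random "observations" of a scene and one query viewpoint.
images, poses = torch.rand(3, IMG), torch.rand(3, POSE)
prediction = predict_view(images, poses, torch.rand(POSE))
print(prediction.shape)  # torch.Size([768]) -> a predicted 16x16x3 view
```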

Philippe J DEWOST's insight:

DeepMind now creates depth in images


Rumors hint future iPhone X could have three cameras on the back


Lots of random sources seem to think some upcoming iteration of the iPhone X will feature three rear-facing cameras, either this year or next. So here we are: I’m telling you about these rumors, although I’m not convinced this is going to happen.

The most recent report comes from The Korea Herald, which claims that both Samsung’s Galaxy S10 and a new iPhone X Plus will feature three camera lenses. It isn’t clear what the third sensor would do exactly for Apple. Huawei incorporated three cameras into its P20 Pro — a 40-megapixel main camera, a 20-megapixel monochrome camera, and an 8-megapixel telephoto camera. Most outlets seem to think Apple’s third lens would be used for an enhanced zoom.

Earlier rumors came from the Taiwanese publication Economic Daily News and an investor's note seen by CNET. Both of those reports indicated a 2019 release date for the phone. All of these rumors seem underdeveloped at the moment, and of course, even if Apple is testing a three-camera setup, the team could always change its mind and stick with the dual cameras. Still, if Apple wants to make an obvious hardware change to its phone cameras, a third lens would be one way to do it.

Philippe J DEWOST's insight:

Sounds like a long march towards Pelican Imaging... How many lenses in total shall we expect a few years from now?


Epic Games shows off amazing real-time digital human with Siren demo


Epic Games, CubicMotion, 3Lateral, Tencent, and Vicon took a big step toward creating believable digital humans today with the debut of Siren, a demo of a woman rendered in real-time using Epic’s Unreal Engine 4 technology.

The move is a step toward transforming both films and games using digital humans who look and act like the real thing. The tech, shown off at Epic’s event at the Game Developers Conference in San Francisco, is available for licensing for game or film makers.

Cubic Motion’s computer vision technology empowered producers to conveniently and instantaneously create digital facial animation, saving the time and cost of digitally animating it by hand.

“Everything you saw was running in the Unreal Engine at 60 frames per second,” said Epic Games chief technology officer Kim Libreri, during a press briefing on Wednesday morning at GDC. “Creating believable digital characters that you can interact with and direct in real-time is one of the most exciting things that has happened in the computer graphics industry in recent years.”

Philippe J DEWOST's insight:

This is Unreal: an absolute must-see video, and an amazing achievement when you realize that rendering happens in real time at 60 frames per second.


H.265 HEVC vs VP9/AV1: a snapshot of the video codec landscape


With the premium segment - led by Apple - now supporting H.265/HEVC, it is time for content distributors to leverage the massive user experience advantages of next-generation compression (H.264/AVC was ratified back in 2003). Using ABR on congested networks, an H.265/HEVC or VP9 stream can deliver HD whereas an H.264/AVC stream would be limited to SD. Of course, this also saves bandwidth/CDN and storage costs.

 

The mass market segment, led by Google, has decided not to support H.265/HEVC and instead supports VP9. Despite lots of propaganda, VP9 performs almost as well as H.265/HEVC (unlike most companies, we have built both encoders). So, beyond the 2003 H.264/AVC codec, both newer codecs will be required. For commercial and political reasons, the two camps will not align around one next-generation codec. In fact, on a low-cost Android phone priced under $100, it is impossible for the OEM to enable H.265/HEVC and pay the associated royalties, since this would wipe out most of their profit. They will only enable VP9.
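As a practical illustration of living with both camps, the sketch below drives ffmpeg from Python to encode the same source once with libx265 (H.265/HEVC) and once with libvpx-vp9. The input filename and the CRF quality settings are placeholder assumptions; a real ABR setup would encode a full ladder of bitrates and resolutions for each codec.

```python
import subprocess

SRC = "input.mp4"  # placeholder source clip

# H.265/HEVC via libx265, keeping the original audio track.
subprocess.run(
    ["ffmpeg", "-i", SRC, "-c:v", "libx265", "-crf", "28", "-c:a", "copy", "out_hevc.mp4"],
    check=True,
)

# VP9 via libvpx-vp9 in constant-quality mode, with Opus audio.
subprocess.run(
    ["ffmpeg", "-i", SRC, "-c:v", "libvpx-vp9", "-crf", "33", "-b:v", "0",
     "-c:a", "libopus", "out_vp9.webm"],
    check=True,
)
```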

Philippe J DEWOST's insight:

Both shall prevail: this is the very insightful conclusion of a battlefield expert on the ongoing video codec war, with clear and interesting data points.


Depth-sensing imaging system can peer through fog: Computational photography could solve a problem that bedevils self-driving cars


An inability to handle misty driving conditions has been one of the chief obstacles to the development of autonomous vehicular navigation systems that use visible light, which are preferable to radar-based systems for their high resolution and ability to read road signs and track lane markers. So, the MIT system could be a crucial step toward self-driving cars.

 

The researchers tested the system using a small tank of water with the vibrating motor from a humidifier immersed in it. In fog so dense that human vision could penetrate only 36 centimeters, the system was able to resolve images of objects and gauge their depth at a range of 57 centimeters.

 

Fifty-seven centimeters is not a great distance, but the fog produced for the study is far denser than any that a human driver would have to contend with; in the real world, a typical fog might afford a visibility of about 30 to 50 meters. The vital point is that the system performed better than human vision, whereas most imaging systems perform far worse. A navigation system that was even as good as a human driver at driving in fog would be a huge breakthrough.

 

"I decided to take on the challenge of developing a system that can see through actual fog," says Guy Satat, a graduate student in the MIT Media Lab, who led the research. "We're dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios."

 

Satat and his colleagues describe their system in a paper they'll present at the International Conference on Computational Photography in May. Satat is first author on the paper, and he's joined by his thesis advisor, associate professor of media arts and sciences Ramesh Raskar, and by Matthew Tancik, who was a graduate student in electrical engineering and computer science when the work was done.

Philippe J DEWOST's insight:

Ramesh Raskar in the mist?


Dustin Farrell’s Storm Chasing delivers stunning lightning shots at 1000 frames per second

Our latest passion project is now live. “Transient” is a compilation of Phantom Flex 4K slow motion and medium format timelapse. Possibly the largest collection of 4K 1000fps lightning footage in the world is now on display in an action-packed 3 minute short. During the Summer of 2017 we spent 30 days chasing storms and …
Philippe J DEWOST's insight:
How many young photographers have dreamt of capturing light in its transience? How many have finally captured it? Here is a trove, and we can watch the bolts develop in slow motion, since the footage was shot on a 4K camera at 1000 frames per second.

Google's first mobile chip is an image processor hidden in the Pixel 2


One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SOC) for consumer products. You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case? Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market. Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.

The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.” It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready. In that way, it’s a rather delightful bonus for new Pixel buyers. The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.
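To see why dedicated silicon helps, here is a deliberately simplified NumPy sketch of the burst-photography idea behind HDR+: average several aligned short exposures to cut noise, then lift the result with a simple tone curve. The synthetic frames, noise level and gamma value are assumptions for illustration; the real HDR+ pipeline aligns raw frames and uses far more sophisticated merging and tone mapping, which is exactly the workload the Pixel Visual Core is meant to accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "scene" and a burst of 8 noisy short-exposure frames of it.
clean = np.tile(np.linspace(0.05, 0.6, 256), (256, 1))
burst = [np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1) for _ in range(8)]

merged = np.mean(burst, axis=0)                    # noise drops roughly with sqrt(#frames)
tone_mapped = np.clip(merged, 0, 1) ** (1 / 2.2)   # simple gamma curve to lift shadows

print("single-frame noise:", np.abs(burst[0] - clean).std().round(4))
print("merged noise:      ", np.abs(merged - clean).std().round(4))
```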

Philippe J DEWOST's insight:

Google"s Pixel Visual Core and its 8 Image Processing Units unveil a counterintuitive hardware approach to High Dynamic Range processing until you understand the design principles of their HDR approach. #HardwareIsNotDead


Shasta Ventures :: Home

Shasta Ventures just announced Camera Fund, a seed fund dedicated entirely to very early investments in companies working on VR, AR, and computer vision.

Cassini’s Mission to Saturn in 100 Images

Cassini arrived at Saturn in 2004, after a seven-year voyage. It was the first spacecraft to orbit the ringed planet.
 
Like Earth, Saturn has a tilted axis. Cassini arrived in the depths of northern winter, with Saturn’s rings tipped up and its north pole in darkness.
 
Cassini used infrared to view the hexagonal jet stream swirling around Saturn’s north pole, a six-sided vortex capped with a shimmering aurora.
 
As spring approached and sunlight returned to Saturn’s north pole, Cassini studied the polar hexagon and the dark hurricane at its center.
Each season on Saturn lasts about seven Earth years. Cassini watched as Saturn’s rings slowly tipped downward, casting narrower and narrower shadows.
The shadows grew narrower until the spring equinox, when Saturn’s rings and equator were flat to the sun.
 
The change in seasons brought a huge storm that wrapped around Saturn’s northern hemisphere. Cassini detected lightning deep within the planet.
 
Mission scientists were particularly interested in Titan, Saturn’s largest moon — a hazy ball larger than the planet Mercury.
Cassini’s cameras were able to pierce Titan’s smoggy nitrogen atmosphere, revealing sunlight glinting on frigid lakes of liquid methane and other hydrocarbons.
Cassini released the Huygens probe to parachute through Titan’s atmosphere. As it descended, the probe recorded rivers and deltas carved by methane rain.
Cassini returned to Titan over 100 times, using the large moon’s gravity to gradually shift the spacecraft’s orbit around Saturn.


Cassini used Titan’s gravity to tour Saturn’s rings, climbing high above the ring plane and threading gaps between the rings.

After 22 passes inside the rings, Cassini will plow into Saturn’s rippled clouds on Friday. The spacecraft will incinerate itself to prevent any future contamination of the moons Enceladus or Titan.

Philippe J DEWOST's insight:

Sublime Nature magnified by Human Science and Technology. These images compiled by the New York Times are worth more than a glance.


10+ Of The Best Shots Of The 2017 Solar Eclipse

On Monday, August 21, 2017, millions of people were staring at the same thing through doofy glasses, as they tried to catch a glimpse of the solar eclipse.

New algorithm lets photographers change the depth of images virtually


Researchers have unveiled a new photography technique called computational zoom that allows photographers to manipulate the composition of their images after they've been taken, and to create what are described as "physically unattainable" photos. The researchers from the University of California, Santa Barbara and tech company Nvidia have detailed the findings in a paper, as spotted by DPReview.

 

In order to achieve computational zoom, photographers have to take a stack of photos that retain the same focal length, but with the camera edging slightly closer and closer to the subject. An algorithm and the computational zoom system then spit out a 3D rendering of the scene with multiple views based on the photo stack. All of that information is then “used to synthesize multi-perspective images which have novel compositions through a user interface” — meaning photographers can then manipulate and change a photo’s composition using the software in real time.

 

 

 

The researchers say the multi-perspective camera model can generate compositions that are not physically attainable, and can extend a photographer’s control over factors such as the relative size of objects at different depths and the sense of depth of the picture. So the final image isn’t technically one photo, but an amalgamation of many. The team hopes to make the technology available to photographers in the form of software plug-ins, reports DPReview.

Philippe J DEWOST's insight:

Will software become more successful than lightfield cameras?


Impressive 2016 CIPA data shows compact digital camera sales lower than ever and crushed by smartphones


Last month, the Camera & Imaging Products Association (CIPA) released its 2016 report detailing yearly trends in camera shipments. Using that data, photographer Sven Skafisk has created a graph that makes it easy to visualize the data, namely the major growth in smartphone sales over the past few years and the apparent impact it has had on dedicated camera sales.

The chart shows smartphone sales achieving a big spike around 2010, the same time range in which dedicated camera sales reached their peak. Each following year has brought substantial growth in smartphone sales and significant decreases in dedicated camera sales, particularly in the compact digital camera category.

Per the CIPA report, total digital camera shipments last year fell by 31.7% over the previous year. The report cites multiple factors affecting digital camera sales overall, with smartphones proving the biggest factor affecting the sales of digital cameras with built-in lenses. The Association's 2017 outlook includes a forecast that compact digital cameras will see another 16.7-percent year-on-year sales decrease this year.
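As a rough sense of scale, the sketch below chains the two percentage drops quoted above on a hypothetical index of 100. Note that the 31.7% figure covers all digital cameras while the 16.7% forecast is for compacts only, so the compounding is indicative rather than exact.

```python
# Chain the two CIPA percentage drops quoted above on a hypothetical
# shipment index of 100 (not real unit counts).
index_2015 = 100.0
index_2016 = index_2015 * (1 - 0.317)   # reported 31.7% year-on-year drop
index_2017 = index_2016 * (1 - 0.167)   # forecast further 16.7% drop (compacts)

print(round(index_2016, 1))  # 68.3
print(round(index_2017, 1))  # 56.9 -> roughly a 43% cumulative decline
```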

Skafisk's graph below shows the massive divide between smartphone sales and camera sales - be prepared to do some scrolling.

Philippe J DEWOST's insight:

Drowning by numbers. The full chart is huge but worth a glance.
