pixels and pictures
Exploring the digital imaging chain from sensors to brains
Scooped by Philippe J DEWOST
May 17, 2018 5:44 PM

Verizon announces RED HYDROGEN ONE, the world’s first holographic smartphone


RED HYDROGEN ONE is a revolutionary new smartphone that will change the way you capture video and immerse yourself in mobile entertainment.

 

HYDROGEN ONE is the world’s first holographic phone with 4-View technology; it’s what 3D always wished it was, and without the glasses. Pair HYDROGEN ONE with Verizon’s best streaming network, according to Nielsen, and Verizon’s unlimited plan and take full advantage of RED’s groundbreaking holographic media display and multidimensional surround sound for the ultimate streaming experience. The 5.7-inch holographic media machine features an industrial yet polished design with a powerful pogo pin system that allows you to add stackable modules to your phone for added functionality. It’s a revolutionary phone designed for digital creativity, from cinema to music to photography to art.

 

“RED HYDROGEN ONE was designed with cutting-edge technology that simply can't be described - you have to hold it in your hands and experience it yourself to understand why this is such a mobile game changer,” said Brian Higgins, vice president, device and consumer product marketing, Verizon. “A phone like this deserves the best network in the country, which is why we can’t wait to bring it to Verizon customers later this year.”

Philippe J DEWOST's insight:

Does anybody understand what this announcement is really about?! Because "cutting-edge technology that simply can't be described" does not really mean anything; it is clickbait marketing...

Scooped by Philippe J DEWOST
April 5, 2018 1:53 AM

Mind-bending new screen technology uses ‘magic pixel’ to display different content to multiple people


Walking through the airport, you look up at the big screen to find your gate. But this is no ordinary public display. Rather than a list of arrivals and departures, you see just your flight information, large enough to view from a distance. Other travelers who look at the same screen, at the same time, see their flight information instead.

 

On the road, traffic signals are targeted individually to your car and other vehicles as they move — showing you a red light when you won’t make the intersection in time, and displaying a green light to another driver who can make it through safely.

At a stadium, the scoreboard displays stats for your favorite players. Fans nearby, each looking simultaneously at the same screen, instead see their favorites and other content customized to them.

 

These are examples of the long-term potential for “parallel reality” display technology to personalize the world, as envisioned by Misapplied Sciences Inc., a Redmond, Wash.-based startup founded by a small team of Microsoft and Walt Disney Imagineering veterans.

 

It might sound implausible, but the company has already developed the underlying technology that will make these scenarios possible. It’s a new type of display, enabled by a “multi-view” pixel. Unlike traditional pixels, each of which emits one color of light in all directions, Misapplied Sciences says its pixel can send different colors of light in tens of thousands or even millions of directions.
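To make the idea concrete, here is a toy model (my illustration, not Misapplied Sciences' actual design) of a multi-view pixel as a direction-indexed color map; the bucket counts are arbitrary and give this toy version 64,800 addressable directions, in the "tens of thousands" range the company describes:

import numpy as np

class MultiViewPixel:
    # Toy multi-view pixel: one RGB value per quantized emission direction,
    # instead of a single color radiated in all directions.
    def __init__(self, az_buckets=360, el_buckets=180):
        self.az, self.el = az_buckets, el_buckets
        self.colors = np.zeros((az_buckets, el_buckets, 3), dtype=np.uint8)

    def _bucket(self, az_deg, el_deg):
        # Map azimuth [0, 360) and elevation [-90, 90) to bucket indices.
        i = int(az_deg % 360 / 360 * self.az)
        j = int((el_deg + 90) % 180 / 180 * self.el)
        return i, j

    def set_beam(self, az_deg, el_deg, rgb):
        # Aim a color at one direction, e.g. one traveler's line of sight.
        self.colors[self._bucket(az_deg, el_deg)] = rgb

    def seen_from(self, az_deg, el_deg):
        return self.colors[self._bucket(az_deg, el_deg)]

px = MultiViewPixel()
px.set_beam(10.0, 0.0, (255, 0, 0))   # viewer A sees red
px.set_beam(45.0, 0.0, (0, 255, 0))   # viewer B, same pixel, sees green
print(px.seen_from(10.0, 0.0), px.seen_from(45.0, 0.0))

The data structure is the easy part, of course; the hard part the company claims to have solved is the optics that actually steer different light toward each direction.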

Philippe J DEWOST's insight:

This is a true "Magic Leap"!

I had not "seen" such impressive technology in the imaging field in the past 10 years, and I strongly recommend that you watch the videos carefully to start getting a clue of what it would actually mean to see lots of different targeted "realities" through the same, unique window...

(Plus, I love the "Misapplied Sciences" company name as well as their logo — it reminds me of our eye-fidelity™ brand at imsense.)

Scooped by Philippe J DEWOST
March 28, 2018 12:54 PM

Peter Thiel Employee Helped Cambridge Analytica Before It Harvested Data - The New York Times


As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon.

 

It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

 

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

 

 


The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

Philippe J DEWOST's insight:

It is really starting to suck. Big time. Palantir's state customers should start to think again, IMHO.


Scooped by Philippe J DEWOST
March 22, 2018 4:54 AM

Google may be buying Lytro's assets for about $40M


Multiple sources tell us that Google is acquiring Lytro, the imaging startup that began as a ground-breaking camera company for consumers before pivoting to use its depth-data, light-field technology in VR.

Emails to several investors in Lytro have received either no response or no comment. Multiple emails to Google and Lytro have also had no response.

But we have heard from several others connected either to the deal or the companies.

One source described the deal as an “asset sale” with Lytro going for no more than $40 million. Another source said the price was even lower: $25 million and that it was shopped around — to Facebook, according to one source; and possibly to Apple, according to another. A separate person told us that not all employees are coming over with the company’s technology: some have already received severance and parted ways with the company, and others have simply left.

Assets would presumably also include Lytro’s 59 patents related to light-field and other digital imaging technology.

The sale would be far from a big win for Lytro and its backers. The startup has raised just over $200 million in funding and was valued at around $360 million after its last round in 2017, according to data from PitchBook. Its long list of investors includes Andreessen Horowitz, Foxconn, GSV, Greylock, NEA, Qualcomm Ventures and many more. Rick Osterloh, SVP of hardware at Google, sits on Lytro’s board.

A price tag of $40 million is not quite the exit that was envisioned for the company when it first launched its camera concept and, in the words of investor Ben Horowitz, “blew my brains to bits.”

Philippe J DEWOST's insight:

Approximately $680k per patent: this is the end of the 12-year-old Lytro story. After $200M in funding and several pivots, its assets and IP are rumored to be joining Google.

Remember some key steps, from the Light Field Camera (http://sco.lt/8Ga7fN) to the DSLR (http://sco.lt/9GGCEz) to 360° video tools (http://sco.lt/5tecr3).

Scooped by Philippe J DEWOST
November 7, 2017 12:52 AM

Here is where everybody can test neural network image 4x super-resolution and enhancement


Free online image upscaling and JPEG artifact removal using neural networks. Image super-resolution and image enhancement online.

Philippe J DEWOST's insight:

Impressive, as far as I could judge from a few samples. Tell me how it works for you.

Scooped by Philippe J DEWOST
October 16, 2017 3:39 AM

Magic Leap raises big new funding round: still no product


Augmented reality startup Magic Leap is raising upwards of $1 billion in new venture capital funding, according to a Delaware regulatory filing unearthed by CB Insights. It would be Series D stock sold at $27 per share, which is a 17.2% bump from Series C shares issued in the summer of 2016.

 

Bottom line: Magic Leap still hasn't come out with a commercial product, having repeatedly missed expected release dates. But investors must still like what they see down in Ft. Lauderdale, given that they keep plugging in more money at increased valuations.

 

Digging in: Multiple sources tell Axios that the deal is closed, although we do not know exactly how much was raised. The Delaware filing is only a stock authorization, which means the Florida-based company may actually raise less. Bloomberg had reported last month that Magic Leap was raising $500 million at around a $5.5 billion pre-money valuation, with new investors expected to include Singapore's Temasek Holdings. One source suggests the final numbers should be close to what Bloomberg reported.

Philippe J DEWOST's insight:

This amazing company is not located in Silicon Valley and has not released any product in its first 7 years of existence, yet it just raised north of $1Bn in a Series D round.

Whatever the outcome, it now promises to be impressive...

Scooped by Philippe J DEWOST
September 15, 2017 4:52 AM

L'histoire de Paris par ses plans

This page gathers more than 60 maps of Paris from every era, in very high resolution, retracing the entire history of the city from its founding to the present day, with all its evolutions. To keep the page light, you sometimes have to click on a map to get the image in its best quality.
Scooped by Philippe J DEWOST
September 13, 2017 1:22 PM

The "Monkey Selfie" Case Has Been Settled — This Is How It Broke Ground for Animal Rights

The "Monkey Selfie" Case Has Been Settled — This Is How It Broke Ground for Animal Rights | pixels and pictures | Scoop.it

After roughly two years of court battles, the groundbreaking lawsuit asking a U.S. federal court to declare Naruto—a free-living crested macaque—the copyright owner of the internationally famous “monkey selfie” photographs has been settled.

 

PETA; photographer David Slater; his company, Wildlife Personalities, Ltd.; and self-publishing platform Blurb, Inc., have reached a settlement of the “monkey selfie” litigation. As a part of the arrangement, Slater has agreed to donate 25 percent of any future revenue derived from using or selling the monkey selfies to charities that protect the habitat of Naruto and other crested macaques in Indonesia.

Philippe J DEWOST's insight:

The picture was taken in 2011 by "Naruto," a wild macaque that pressed the shutter on Slater's camera. Did the primate own the rights to the picture? In a tentative opinion issued last year, a district judge said there was "no indication" that the U.S. Copyright Act extended to animals. Slater, meanwhile, argued that the UK copyright license he obtained for the picture should be valid worldwide.

Sadly, we will never know what "Naruto" thinks of human copyright laws...

Scooped by Philippe J DEWOST
August 25, 2017 6:38 AM

Photographer Shoots Formula 1 With 104-Year-Old Camera, And Here’s The Result!

If ever there was a sport that required rapid-fire photography, Formula One racing is it. Which makes what photographer Joshua Paul does even more fascinating, because instead of using top-of-the-range cameras to capture the fast-paced sport, Paul chooses to take his shots using a 104-year-old Graflex 4×5 view camera. The photographer clearly has an incredible eye for detail, because unlike modern cameras that can take as many as 20 frames per second, his 1913 Graflex can only take 20 pictures in total. Because of this, every shot he takes has to be carefully thought about first, and this is clearly evident in this beautiful series of photographs.
Philippe J DEWOST's insight:
Some centennial imaging hardware is still in the game when put in proper hands...
Scooped by Philippe J DEWOST
March 13, 2017 3:56 PM

This photo of some strawberries with no red pixels is the new 'the dress'


Remember the internet kerfuffle that was 'the dress'? Well, there's another optical illusion that's puzzling the internet right now. Behold: the red strawberries that aren't really red. Or, more specifically, the image of the strawberries contains no 'red pixels.'

The important distinction to make here is that there is red information in the image (and, crucially, the relationships between colors are preserved). But despite what your eyes might be telling you, there are no pixels at either end of the 'H' axis of the HSV color model; that is, there is no pixel that, in isolation, would be considered red, hence no 'red pixels' in the image.
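The claim is easy to check yourself. Here is a minimal sketch that counts pixels falling in a "red" hue band in HSV; the filename and the 30° band are my assumptions, not the article's:

import colorsys
from PIL import Image

RED_BAND_DEG = 30              # hues within 30 degrees of pure red count as "red" (arbitrary threshold)
MIN_SAT, MIN_VAL = 0.25, 0.25  # ignore near-grey and near-black pixels

img = Image.open("strawberries.jpg").convert("RGB")  # hypothetical local copy of the image
red_pixels = 0
for r, g, b in img.getdata():
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue_deg = h * 360
    # Hue is circular: red sits near 0 and near 360, i.e. both ends of the H axis.
    if (hue_deg < RED_BAND_DEG or hue_deg > 360 - RED_BAND_DEG) and s >= MIN_SAT and v >= MIN_VAL:
        red_pixels += 1

print(f"{red_pixels} pixels fall in the red hue band")  # the article's claim: essentially zero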

 

So it's not that your brain is being tricked into inventing the red information, it's that your brain knows how much emphasis to give this red information, so that colors that it would see as cyan or grey in other contexts are interpreted as red here.

 

As was the case with 'the dress,' it all comes down to a concept called color constancy: the human brain's ability to perceive objects as the same color under different lighting (though in this case there are unambiguous visual cues as to what the 'correct' answer is).

 

Philippe J DEWOST's insight:

No "red pixel" has ever been added to this image and they are not "invented" by your brain either. Funny enough, our human white balancing capabilities go way beyond cameras "auto white balance" mode...

Rescooped by Philippe J DEWOST from cross pond high tech
March 13, 2017 11:47 AM

Self-driving deal? Intel lines up $15B Mobileye acquisition


Intel inked a deal to acquire Mobileye, which the chipmaker’s chief Brian Krzanich said enables it to “accelerate the future of autonomous driving with improved performance in a cloud-to-car solution at a lower cost for automakers”.

 

Mobileye offers technology covering computer vision and machine learning, data analysis, localisation and mapping for advanced driver assistance systems and autonomous driving. The deal is said to fit with Intel’s strategy to “invest in data-intensive market opportunities that build on the company’s strengths in computing and connectivity from the cloud, through the network, to the device”.

 

A combined Intel and Mobileye automated driving unit will be based in Israel and headed by Amnon Shashua, co-founder, chairman and CTO of the acquired company. This, Intel said, “will support both companies’ existing production programmes and build upon relationships with car makers, suppliers and semiconductor partners to develop advanced driving assist, highly-autonomous and fully autonomous driving programmes”.

 

 

Philippe J DEWOST's insight:

Intel is going mobile (again), and this time in the car.

The target, Israel's Mobileye, was a Tesla partner until September 2016, when the two broke up "ugly".

In terms of volumes, according to Statista, some 77.73 million automobiles are expected to be sold in 2017, and global car sales are expected to exceed 100 million units by 2020. Depending on the growth of the autonomous vehicle segment, this will still be a fraction of the (lost) smartphone market, even if the price points are expected to be somewhat different...

The bet and the race are now between vertical integration and layering the market. Any clue who might win?


 

Scooped by Philippe J DEWOST
February 21, 2017 4:10 AM

Man points camera at ice – seconds later he captures the impossible on film

Watch an ice piece the size of Manhattan fall into the sea.
Philippe J DEWOST's insight:
Watch an ice piece the size of Manhattan fall into the sea and draw your own conclusions about icecap melting and global warming...
Scooped by Philippe J DEWOST
January 16, 2017 10:48 AM

Google RAISR uses machine learning for smarter upsampling

Upsampling techniques to create larger versions of low-resolution images have been around for a long time – at least as long as TV detectives have been asking computers to 'enhance' images. Common linear methods fill in new pixels using simple and fixed combinations of nearby existing pixel values, but fail to increase image detail. The engineers at Google's research lab have now created a new way of upsampling images that achieves noticeably better results than the previously existing methods.

RAISR (Rapid and Accurate Image Super-Resolution) uses machine learning to train an algorithm on pairs of images, one low-resolution, the other with a high pixel count. RAISR learns filters that, when applied to each pixel of a low-resolution image, can recreate image detail comparable to the original. Filters are trained according to edge features found in small areas of images, including edge direction, edge strength and how directional the edge is. The training process with a database of 10,000 image pairs takes approximately an hour.
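The excerpt maps to a fairly compact algorithm. Below is a toy sketch of the core idea (my simplification, not Google's code): pre-upscale cheaply, bucket each patch by its dominant edge angle, and learn one least-squares filter per bucket. The patch size, the bucket count, and using only edge direction (ignoring strength and coherence) are all simplifying assumptions:

import numpy as np

PATCH, BUCKETS = 7, 8  # 7x7 filters, 8 edge-orientation buckets

def edge_bucket(patch):
    # Quantize the patch's dominant gradient angle into one of BUCKETS bins (crude estimator).
    gy, gx = np.gradient(patch)
    angle = np.arctan2(gy.sum(), gx.sum()) % np.pi
    return int(angle / np.pi * BUCKETS) % BUCKETS

def train(cheap_up, ground_truth):
    # cheap_up: the ground-truth image downscaled, then re-upscaled (e.g. bilinearly)
    # back to the original size. Accumulate per-bucket normal equations A^T A x = A^T b.
    n = PATCH * PATCH
    ata = np.zeros((BUCKETS, n, n))
    atb = np.zeros((BUCKETS, n))
    r = PATCH // 2
    h, w = ground_truth.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            p = cheap_up[y - r:y + r + 1, x - r:x + r + 1]
            b = edge_bucket(p)
            v = p.ravel()
            ata[b] += np.outer(v, v)
            atb[b] += v * ground_truth[y, x]
    return np.stack([np.linalg.lstsq(ata[b], atb[b], rcond=None)[0] for b in range(BUCKETS)])

def apply_filters(cheap_up, filters):
    # Replace each pixel by its patch filtered with the filter of its edge bucket.
    out = cheap_up.astype(np.float64)
    r = PATCH // 2
    h, w = cheap_up.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            p = cheap_up[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = p.ravel() @ filters[edge_bucket(p)]
    return out

The real system hashes on more features (edge strength and directionality, per the excerpt) and trains on many images, but the shape of the computation, a cheap upscale plus a learned per-bucket filter, is the same.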

Philippe J DEWOST's insight:

Google introduces RAISR: when AI meets pixels to reduce bandwidth while improving visual perception.

Scooped by Philippe J DEWOST
April 24, 2018 11:46 PM

Epic Games shows off amazing real-time digital human with Siren demo


Epic Games, CubicMotion, 3Lateral, Tencent, and Vicon took a big step toward creating believable digital humans today with the debut of Siren, a demo of a woman rendered in real-time using Epic’s Unreal Engine 4 technology.

The move is a step toward transforming both films and games using digital humans who look and act like the real thing. The tech, shown off at Epic’s event at the Game Developers Conference in San Francisco, is available for licensing for game or film makers.

Cubic Motion’s computer vision technology empowered producers to conveniently and instantaneously create digital facial animation, saving the time and cost of digitally animating it by hand.

“Everything you saw was running in the Unreal Engine at 60 frames per second,” said Epic Games chief technology officer Kim Libreri, during a press briefing on Wednesday morning at GDC. “Creating believable digital characters that you can interact with and direct in real-time is one of the most exciting things that has happened in the computer graphics industry in recent years.”

Philippe J DEWOST's insight:

This is Unreal: an absolute must-see video. An amazing achievement when you realize that the rendering happens in real time at 60 frames per second.

Scooped by Philippe J DEWOST
April 4, 2018 9:34 AM

H.265 HEVC vs VP9/AV1 : a snapshot on the video codec landscape


With the premium segment - led by Apple - now supporting H.265/HEVC, it is time for content distributors to leverage the massive user-experience advantages of next-generation compression (H.264/AVC was ratified back in 2003). Using ABR on congested networks, an H.265/HEVC or VP9 stream can deliver HD where an H.264/AVC stream would be limited to SD. Of course, this also saves bandwidth/CDN and storage costs.

 

The mass-market segment, led by Google, has decided not to support H.265/HEVC and instead supports VP9. Despite lots of propaganda, VP9 performs almost as well as H.265/HEVC (unlike most companies, we have built both encoders). So, post the 2003 H.264/AVC codec, both codecs will be required. Due to commercial and political reasons, the two camps will not align around one next-generation codec. In fact, on a low-cost Android phone priced under $100, it is impossible for the OEM to enable H.265/HEVC and pay the royalties, since this would remove most of their profits. They will only enable VP9.
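In practice this means content distributors encode everything twice. A minimal sketch of such a dual encode, driving stock ffmpeg from Python (the filenames and CRF values are illustrative assumptions, not figures from the article):

import subprocess

SRC = "source.mp4"  # hypothetical mezzanine file

# H.265/HEVC for the premium, Apple-led segment...
subprocess.run(["ffmpeg", "-i", SRC, "-c:v", "libx265", "-crf", "28",
                "-preset", "medium", "-c:a", "copy", "out_hevc.mp4"], check=True)

# ...and VP9 for the mass-market, Google-led segment.
subprocess.run(["ffmpeg", "-i", SRC, "-c:v", "libvpx-vp9", "-crf", "33",
                "-b:v", "0", "-c:a", "libopus", "out_vp9.webm"], check=True)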

Philippe J DEWOST's insight:

Both shall prevail: this is the very insightful conclusion of a battlefield expert on the ongoing video codec war, with clear and interesting data points.

Scooped by Philippe J DEWOST
March 23, 2018 10:15 AM

Depth-sensing imaging system can peer through fog: Computational photography could solve a problem that bedevils self-driving cars


An inability to handle misty driving conditions has been one of the chief obstacles to the development of autonomous vehicular navigation systems that use visible light, which are preferable to radar-based systems for their high resolution and ability to read road signs and track lane markers. So, the MIT system could be a crucial step toward self-driving cars.

 

The researchers tested the system using a small tank of water with the vibrating motor from a humidifier immersed in it. In fog so dense that human vision could penetrate only 36 centimeters, the system was able to resolve images of objects and gauge their depth at a range of 57 centimeters.

 

Fifty-seven centimeters is not a great distance, but the fog produced for the study is far denser than any that a human driver would have to contend with; in the real world, a typical fog might afford a visibility of about 30 to 50 meters. The vital point is that the system performed better than human vision, whereas most imaging systems perform far worse. A navigation system that was even as good as a human driver at driving in fog would be a huge breakthrough.

 

"I decided to take on the challenge of developing a system that can see through actual fog," says Guy Satat, a graduate student in the MIT Media Lab, who led the research. "We're dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios."

 

Satat and his colleagues describe their system in a paper they'll present at the International Conference on Computational Photography in May. Satat is first author on the paper, and he's joined by his thesis advisor, associate professor of media arts and sciences Ramesh Raskar, and by Matthew Tancik, who was a graduate student in electrical engineering and computer science when the work was done.
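The excerpt doesn't spell out the method, so here is only a loose illustration of the general idea behind time-resolved imaging through scatter (my assumption, not the paper's actual statistical model): treat each pixel's photon-arrival histogram as smooth fog backscatter plus a reflected pulse, estimate and subtract the backscatter, and read depth off the pulse position.

import numpy as np

C = 3e8        # speed of light, m/s
BIN = 50e-12   # histogram time-bin width: 50 ps (assumed)

rng = np.random.default_rng(0)
t = np.arange(400)  # time bins
# Synthetic pixel: exponentially decaying fog backscatter plus an object pulse at bin 230.
fog = 80 * np.exp(-t / 120.0)
signal = 25 * np.exp(-0.5 * ((t - 230) / 3.0) ** 2)
counts = rng.poisson(fog + signal)

# Crude backscatter estimate: smooth the histogram heavily so the narrow
# pulse averages away, then subtract it.
kernel = np.ones(41) / 41
background = np.convolve(counts, kernel, mode="same")
residual = counts - background

peak_bin = int(np.argmax(residual))
depth = peak_bin * BIN * C / 2  # round trip, so divide by two
print(f"estimated depth: {depth:.2f} m (true pulse at bin 230)")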

Philippe J DEWOST's insight:

Ramesh Raskar in the mist?

Scooped by Philippe J DEWOST
December 6, 2017 12:13 AM

Dustin Farrell’s Storm Chasing delivers stunning lightning shots at 1000 frames per second

Our latest passion project is now live. “Transient” is a compilation of Phantom Flex 4K slow motion and medium format timelapse. Possibly the largest collection of 4K 1000fps lightning footage in the world is now on display in an action-packed 3 minute short. During the Summer of 2017 we spent 30 days chasing storms and …
Philippe J DEWOST's insight:
How many young photographers have dreamt of capturing light in its transience? How many finally captured one? Here is a trove, and we can see the bolts develop in slo-mo, as the initial shooting involved a 4K camera taking 1000 frames per second.
Scooped by Philippe J DEWOST
October 19, 2017 2:46 AM

Google's first mobile chip is an image processor hidden in the Pixel 2


One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SOC) for consumer products. You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A-series chips. The purpose in this case? Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market. Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.

The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.” It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready. In that way, it’s a rather delightful bonus for new Pixel buyers. The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be faster and more efficient.
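The article doesn't detail the HDR+ pipeline itself; as a rough sketch of the burst-photography idea the chip accelerates (a toy stand-in: the real pipeline aligns tiles and merges robustly rather than shifting and averaging whole frames), with grayscale frames assumed:

import numpy as np

def align_global(ref, frame, search=8):
    # Toy alignment: exhaustive search for the best global (dy, dx) shift.
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.mean((np.roll(frame, (dy, dx), axis=(0, 1)) - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return np.roll(frame, best, axis=(0, 1))

def merge_burst(frames):
    # frames: list of 2D arrays from a burst of short exposures.
    ref = frames[0].astype(np.float32)
    aligned = [ref] + [align_global(ref, f.astype(np.float32)) for f in frames[1:]]
    return np.mean(aligned, axis=0)  # averaging N frames cuts noise by roughly sqrt(N)

Doing this kind of work at full sensor resolution, on every shot, within a phone's power budget is exactly the sort of workload that justifies a dedicated chip.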

Philippe J DEWOST's insight:

Google"s Pixel Visual Core and its 8 Image Processing Units unveil a counterintuitive hardware approach to High Dynamic Range processing until you understand the design principles of their HDR approach. #HardwareIsNotDead

Scooped by Philippe J DEWOST
September 19, 2017 12:02 AM

Shasta Ventures :: Home

Shasta Ventures just announced Camera Fund, a seed fund dedicated entirely to very early investments in companies working on VR, AR, and computer vision.
Scooped by Philippe J DEWOST
September 15, 2017 2:01 AM

Cassini’s Mission to Saturn in 100 Images

Cassini arrived at Saturn in 2004, after a seven-year voyage. It was the first spacecraft to orbit the ringed planet.
 
Like Earth, Saturn has a tilted axis. Cassini arrived in the depths of northern winter, with Saturn’s rings tipped up and its north pole in darkness.
 
Cassini used infrared to view the hexagonal jet stream swirling around Saturn’s north pole, a six-sided vortex capped with a shimmering aurora.
 
As spring approached and sunlight returned to Saturn’s north pole, Cassini studied the polar hexagon and the dark hurricane at its center.
Each season on Saturn lasts about seven Earth years. Cassini watched as Saturn’s rings slowly tipped downward, casting narrower and narrower shadows.
The shadows grew narrower until the spring equinox, when Saturn’s rings and equator were flat to the sun.
 
The change in seasons brought a huge storm that wrapped around Saturn’s northern hemisphere. Cassini detected lightning deep within the planet.
 
Mission scientists were particularly interested in Titan, Saturn’s largest moon — a hazy ball larger than the planet Mercury.
Cassini’s cameras were able to pierce Titan’s smoggy nitrogen atmosphere, revealing sunlight glinting on frigid lakes of liquid methane and other hydrocarbons.
Cassini released the Huygens probe to parachute through Titan’s atmosphere. As it descended, the probe recorded rivers and deltas carved by methane rain.
Cassini returned to Titan over 100 times, using the large moon’s gravity to gradually shift the spacecraft’s orbit around Saturn.


Cassini used Titan’s gravity to tour Saturn’s rings, climbing high above the ring plane and threading gaps between the rings.

After 22 passes inside the rings, Cassini will plow into Saturn’s rippled clouds on Friday. The spacecraft will incinerate itself to prevent any future contamination of the moons Enceladus or Titan.

Philippe J DEWOST's insight:

Sublime Nature magnified by Human Science and Technology. These images compiled by the New York Times are worth more than a glance.

Scooped by Philippe J DEWOST
August 27, 2017 2:55 AM

10+ Of The Best Shots Of The 2017 Solar Eclipse

On Monday, August 21, 2017, millions of people were staring at the same thing through doofy glasses, as they tried to catch a glimpse of the solar eclipse.
Scooped by Philippe J DEWOST
August 10, 2017 9:35 AM

New algorithm lets photographers change the depth of images virtually


Researchers have unveiled a new photography technique called computational zoom that allows photographers to manipulate the composition of their images after they've been taken, and to create what are described as "physically unattainable" photos. The researchers from the University of California, Santa Barbara and tech company Nvidia have detailed the findings in a paper, as spotted by DPReview.

 

In order to achieve computational zoom, photographers have to take a stack of photos that retain the same focal length, but with the camera edging slightly closer and closer to the subject. An algorithm and the computational zoom system then spit out a 3D rendering of the scene with multiple views based on the photo stack. All of that information is then “used to synthesize multi-perspective images which have novel compositions through a user interface” — meaning photographers can then manipulate and change a photo’s composition using the software in real time.

 

 

 

The researchers say the multi-perspective camera model can generate compositions that are not physically attainable, and can extend a photographer’s control over factors such as the relative size of objects at different depths and the sense of depth of the picture. So the final image isn’t technically one photo, but an amalgamation of many. The team hopes to make the technology available to photographers in the form of software plug-ins, reports DPReview.

Philippe J DEWOST's insight:

Will software become more successful than light-field cameras?

Scooped by Philippe J DEWOST
March 13, 2017 2:10 PM

Impressive 2016 CIPA data shows compact digital camera sales lower than ever and crushed by smartphones


Last month, the Camera & Imaging Products Association (CIPA) released its 2016 report detailing yearly trends in camera shipments. Using that data, photographer Sven Skafisk has created a graph that makes it easy to visualize the data, namely the major growth in smartphone sales over the past few years and the apparent impact it has had on dedicated camera sales.

The chart shows smartphone sales spiking around 2010, the same time range in which dedicated camera sales reached their peak. Each following year has brought substantial growth in smartphone sales and significant decreases in dedicated camera sales, particularly in the compact digital camera category.

Per the CIPA report, total digital camera shipments last year fell by 31.7% from the previous year. The report cites multiple factors affecting digital camera sales overall, with smartphones proving the biggest factor affecting the sales of digital cameras with built-in lenses. The Association's 2017 outlook includes a forecast that compact digital cameras will see another 16.7-percent year-on-year sales decrease this year.

Skafisk's graph below shows the massive divide between smartphone sales and camera sales - be prepared to do some scrolling.

Philippe J DEWOST's insight:

Drowning by numbers. The full chart is huge but worth a glance.

Scooped by Philippe J DEWOST
March 10, 2017 10:04 AM

The Hardest Mandelbrot Zoom in 2016 - New record, 750 000 000 iterations

The 2016 record: an iteration count reaching 750 000 000.

This value took more than 10 gigabytes of RAM to render the reference.

The min-iteration count of the last keyframe is 153 619 576.
Coordinates:
Re : -1.74995768370609350360221450607069970727110579726252077930242837820286008082972804887218672784431700831100544507655659531379747541999999995
Im : 0.00000000000000000278793706563379402178294753790944364927085054500163081379043930650189386849765202169477470552201325772332454726999999995

Zoom: 3.18725072474E99

Realized with Kalles Fraktaler 2.6.2
Music: London Grammar - Strong (PureNRG Remix)
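For scale: at a zoom of roughly 3.19E99 the pixel spacing is around 1e-100, far below double precision (about 1e-16), which is why the orbit has to be iterated at the ~140-digit precision of the coordinates above. A minimal sketch of that iteration (mine, not the video author's code) using mpmath:

from mpmath import mp, mpc

mp.dps = 140  # decimal digits of precision, matching the published coordinates
c = mpc("-1.74995768370609350360221450607069970727110579726252077930242837820286008082972804887218672784431700831100544507655659531379747541999999995",
        "0.00000000000000000278793706563379402178294753790944364927085054500163081379043930650189386849765202169477470552201325772332454726999999995")

z, n, max_iter = mpc(0), 0, 10000  # tiny cap for the sketch; the video ran 750 000 000
while abs(z) <= 2 and n < max_iter:
    z = z * z + c  # the Mandelbrot iteration
    n += 1
print(n)  # a point this deep should not escape within the cap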

Philippe J DEWOST's insight:

720p, full screen mode, and no substances needed for this insane visual experience.

Scooped by Philippe J DEWOST
February 17, 2017 9:07 AM

Lytro pursues 360-degree video and cinematic tools with $60M Series D


Ever-shifting camera tech company Lytro has raised major cash to continue development and deployment of its cinema-level camera systems. Perhaps the company’s core technology, “light field photography” that captures rich depth data, will be put to better use there than it was in the ill-fated consumer offerings.

“We believe we have the opportunity to be the company that defines the production pipeline, technologies and quality standards for an entire next generation of content,” wrote CEO Jason Rosenthal in a blog post.

Just what constitutes that next generation is rather up in the air right now, but Lytro feels sure that 360-degree 3D video will be a major part of it. That’s the reason it created its Immerge capture system — and then totally re-engineered it from a spherical lens setup to a planar one. 

.../...

The $60M round was led by Blue Pool Capital, with participation from EDBI, Foxconn, Huayi Brothers and Barry Sternlicht. “We believe that Asia in general and China in particular represent hugely important markets for VR and cinematic content over the next five years,” Rosenthal said in a statement.

It’s a hell of a lot of money, more even than the $50M round the company raised to develop its original consumer camera — which flopped. Its Illum follow-up camera, aimed at more serious photographers, also flopped. Both were innovative technologically but expensive and their use cases questionable.

Philippe J DEWOST's insight:

Light-field camera design startup Lytro is still not dead after 5 years and several pivots, as it gobbles up an additional $60M in funding.
