We certainly do live in exciting times. Do you recall my column about the JOYCE Project to Equip Machines with Human-Like Perception, where JOYCE herself is a humanoid robot?

In that column, I mentioned that one very clever aspect of all this is the way in which JOYCE, the brainchild of the bright guys and gals at Immervision, employs the latest and greatest in data-in-picture technology, whereby meta-information is embedded directly into the pixels forming the images. The idea is that each of JOYCE's video frames can be enriched with data from a wide array of sensors, providing contextual information that can be put to work by things like artificial intelligence.
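Immervision hasn't published the nitty-gritty details of its data-in-picture encoding, so please take the following as nothing more than a conceptual sketch of the general idea. Here, I'm simply tucking a small JSON blob of hypothetical sensor readings into the least-significant bits at the start of an 8-bit RGB frame using NumPy:

```python
# Conceptual sketch only: the real data-in-picture encoding is proprietary.
# We pack a JSON blob of sensor readings into the least-significant bits
# of the frame, preceded by a 4-byte length header.
import json
import numpy as np

def embed_metadata(frame: np.ndarray, metadata: dict) -> np.ndarray:
    """Hide a metadata payload in the LSBs of an 8-bit RGB frame (hypothetical scheme)."""
    payload = json.dumps(metadata).encode("utf-8")
    header_and_payload = len(payload).to_bytes(4, "big") + payload
    bits = np.unpackbits(np.frombuffer(header_and_payload, dtype=np.uint8))
    flat = frame.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("Frame too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs only
    return flat.reshape(frame.shape)

def extract_metadata(frame: np.ndarray) -> dict:
    """Recover the payload by reading the LSBs back out again."""
    flat = frame.reshape(-1)
    length = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    payload_bits = flat[32:32 + length * 8] & 1
    return json.loads(np.packbits(payload_bits).tobytes().decode("utf-8"))

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
enriched = embed_metadata(frame, {"temp_c": 21.4, "yaw_deg": 87.2, "lux": 312})
print(extract_metadata(enriched))   # {'temp_c': 21.4, 'yaw_deg': 87.2, 'lux': 312}
```

The appeal of carrying the data inside the pixels themselves is that it survives whatever happens to the video stream, because wherever the frame goes, its context goes with it.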

At the time I wrote that column, the sort of sensor data I was envisaging embedding was along the lines of audio (from microphones), motion and orientation (from accelerometers, gyroscopes, and magnetometers), environmental (from temperature, humidity, and barometric pressure sensors), and so forth. Well, as I wrote in Securing Artificial Intelligence Before It Secures Us!, as of this week we can add the sense of smell to this list, because SmartNanotubes Technologies has just launched a nifty new nanotube-based olfactory sensor (see The Electronic Nose Knows!).

As fate would have it, I've been chatting with the folks at Immervision, who have introduced me to all sorts of cunning concepts that have boggled my brain. For example, I knew that traditional camera lenses are spherical in nature. I even knew that the lens assemblies in modern cameras are actually composed of stacks of sub-lenses to correct for things like chromatic aberration.

What I didn't know was that the latter part of the 20th century saw the deployment of aspherical lenses with one degree of freedom, which essentially means that the horizontal and vertical axes of the lens can be treated differently. Even more exciting is the fact that cutting-edge camera vision system companies like Immervision now have the capability to create lens assemblies using state-of-the-art freeform technologies that support unlimited degrees of freedom. This allows designers to reshape the image footprint to provide optimal sensor coverage, asymmetric pixel density, and more "usable" pixels containing better data.
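To put the "degrees of freedom" business into perspective, one common textbook way to describe a freeform surface (not necessarily the representation Immervision uses) is as a polynomial in x and y, where every coefficient is an independent knob the designer can turn. A quick sketch:

```python
import numpy as np

def freeform_sag(x, y, coeffs):
    """Surface height z(x, y) as an XY polynomial, one common way to describe a
    freeform optical surface. Every coefficient is an independent degree of
    freedom available to the lens designer."""
    z = np.zeros_like(x, dtype=float)
    for (i, j), c in coeffs.items():
        z += c * (x ** i) * (y ** j)
    return z

# A spherical surface has essentially one knob (its curvature); a freeform
# surface can use as many coefficients as the design calls for.
coeffs = {(2, 0): 0.012, (0, 2): 0.018, (2, 2): -0.0004, (4, 0): 1.5e-5, (0, 4): 2.1e-5}
x, y = np.meshgrid(np.linspace(-10, 10, 5), np.linspace(-10, 10, 5))
print(freeform_sag(x, y, coeffs).round(3))
```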

But wait, there’s more, because Immervision’s wide-angle freeform Panomorph lenses outperform all competing technologies, allowing conventional fisheye lenses to be replaced with a technology that enables an ultra-wide-angle field of view, augmented resolution, flawless viewing, and miniaturization of optics. Take the image below, for example:

 

Traditional wide-angle lens vs. freeform Panomorph lens (Image source: Immervision)

For historical reasons, image sensors typically follow 4:3 or 16:9 aspect ratios. In the case of a traditional wide-angle lens (left), this results in a lot of unused pixels on the sensor. By comparison, a freeform Panomorph provides around 33% more usable pixels, which equates to 33% more usable data.
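In case you're wondering where the 33% comes from, here's my own back-of-the-envelope reconstruction (not Immervision's official figures). It assumes the traditional lens projects a circular image inscribed in the sensor's height, while the freeform footprint is stretched to fill the sensor's full width:

```python
import math

def usable_pixel_gain(aspect_w: float, aspect_h: float) -> float:
    """Compare a circular image footprint (limited by the sensor height) with an
    elliptical footprint stretched to fill the sensor's full width."""
    circle_area = math.pi * (aspect_h / 2) ** 2                 # traditional wide-angle footprint
    ellipse_area = math.pi * (aspect_w / 2) * (aspect_h / 2)    # freeform/anamorphic footprint
    return ellipse_area / circle_area - 1.0                     # fractional gain in usable pixels

print(f"4:3 sensor: {usable_pixel_gain(4, 3):.0%} more usable pixels")   # "33% more usable pixels"
```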

Now, when you look at the image above, you might be thinking, “All they are doing is varying the horizontal and vertical axes of the lens,” but there’s so much more, because the folks at Immervision can manipulate every part of the lens. Consider the image below, for example. In this case, the Panomorph wide-angle lens has been tailored to provide twice the number of usable pixels at the edge of the image:

 

Traditional wide-angle lens vs. freeform Panomorph lens (Image source: Immervision)
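To make the idea of a tailored pixel-density profile a little more concrete, here's a toy comparison (my own illustrative curves, not Immervision's proprietary projection data) between an equidistant fisheye mapping, which spreads pixels evenly across the field of view, and a freeform-style mapping whose density climbs towards the edge:

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 1001)   # field angle, normalised from centre (0) to edge (1)

r_fisheye = theta                      # equidistant fisheye: r = f * theta, uniform pixel density
r_freeform = theta ** 2                # illustrative freeform curve: density grows towards the edge

# The derivative dr/dtheta is the local pixel density (pixels per unit field angle).
d_fisheye = np.gradient(r_fisheye, theta)
d_freeform = np.gradient(r_freeform, theta)

print(f"density ratio at the centre: {d_freeform[0] / d_fisheye[0]:.2f}")    # ~0.00
print(f"density ratio at the edge:   {d_freeform[-1] / d_fisheye[-1]:.2f}")  # ~2.00
```

Under this toy mapping, the edge of the image gets roughly twice the pixels per degree of a plain fisheye, at the expense of the centre. The point is that this is exactly the kind of trade-off a designer can now dial in deliberately, depending on where in the scene the detail matters most.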

What does this mean in the real world? Well, look at the image below, which shows the results after the images have been de-warped in software:

Traditional wide-angle lens vs. freeform Panomorph lens (Image source: Immervision)

As we see, the images generated from the traditional wide-angle lens are significantly more pixelated and considerably less appealing than the images generated from the Panomorph wide-angle lens. And it's not just a matter of the images being more aesthetically appealing to the human eye, because higher-quality images allow artificial intelligence (AI) and machine vision systems to better perform tasks like object detection and recognition.
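The de-warping step itself is standard image-processing fare. Here's a minimal sketch using OpenCV's remap function, assuming a simple equidistant fisheye model with a 180-degree field of view; a real pipeline would, of course, use the lens's calibrated projection curve rather than this idealized one:

```python
import cv2
import numpy as np

def dewarp_equidistant(src: np.ndarray, out_size=(640, 480), focal_px=300.0) -> np.ndarray:
    """Re-project an equidistant-fisheye image onto a rectilinear (pinhole) view.
    Simplified sketch: a real pipeline would use the lens's calibrated projection curve."""
    h_out, w_out = out_size[1], out_size[0]
    cx_src, cy_src = src.shape[1] / 2.0, src.shape[0] / 2.0
    f_src = src.shape[0] / np.pi          # fisheye focal length: 180 deg FOV across the image height

    # For every output pixel, find its viewing ray, then where that ray lands in the fisheye image.
    xs, ys = np.meshgrid(np.arange(w_out) - w_out / 2.0, np.arange(h_out) - h_out / 2.0)
    theta = np.arctan2(np.hypot(xs, ys), focal_px)   # angle from the optical axis
    phi = np.arctan2(ys, xs)                         # azimuth around the axis
    r_src = f_src * theta                            # equidistant model: r = f * theta

    map_x = (cx_src + r_src * np.cos(phi)).astype(np.float32)
    map_y = (cy_src + r_src * np.sin(phi)).astype(np.float32)
    return cv2.remap(src, map_x, map_y, interpolation=cv2.INTER_LINEAR)

fisheye = cv2.imread("fisheye_frame.png")            # hypothetical input frame
if fisheye is not None:
    cv2.imwrite("dewarped_frame.png", dewarp_equidistant(fisheye))
```

The thing to remember is that remapping can only redistribute the pixels the lens has already captured, which is why starting out with more usable pixels in the right places pays such dividends downstream.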

 

It’s More than Technology — You Need Expertise!

In addition to conventional RGB imaging, anamorphic freeform lenses are also applicable to wide-angle LIDAR. However, technology on its own is not sufficient; a high degree of expertise is also necessary. Unfortunately, such expertise is oftentimes sadly lacking. In particular, the LIDAR industry is not yet mature, and as a result many companies front-end their LIDAR systems with conventional optics, resulting in sub-optimal performance due to a lack of usable pixels.

Many designers consider freeform camera lens systems to be a new trend. In reality, the world-leading team of multidisciplinary scientists, optical designers, and image processing engineers in Immervision's InnovationLab has been creating freeform designs for more than fifteen years, developing a sophisticated suite of design, evaluation, simulation, and verification hardware and software tools along the way.

 

What? You Want to Learn More?

Well, this is your lucky day, because the Automotive Sensors & Electronics: LIDAR, Radar, and Camera Online Summit will take place 17-18 February 2021. At this virtual event, the world’s experts in this field will focus on the new generation of automotive sensing systems — including camera vision and flash LIDAR — that will dramatically improve safety, performance, and the driving experience.

Even better, on Thursday, 18 February 2021 at 10:30 am EST (4:30 pm CET), Patrice Roulet, Vice President of Technology and Co-Founder of Immervision, will be giving his presentation, The Future of Automotive Sensing, in which he will cover everything I’ve touched on here, along with much, much more.

 

Don’t Wait — Register Now!

Register Now to attend the online summit and watch Patrice’s presentation to learn how anamorphic freeform lenses can be used with both conventional RGB imaging and wide-angle LIDAR systems to provide better vision, better decisions, and better outcomes.