AI’s crystal ball: Predicting future camera features in 2034

What features will cameras have ten years from now that cameras do not have now? I asked Gemini AI. Here are the answers.

AI’s Crystal Ball – Predicting future camera features in 2034. Image by Justin Clark from Unsplash.

I don’t know that AI — our new buzzword for what is largely machine learning — has a crystal ball. But Google’s Gemini is pretty good at scraping the internet for information and ideas. I asked what we might see in 10 years.

Actually, what I did was ask it several times. Then I combined some of the more interesting answers into this article. Gemini divided its answers into similar categories each time it answered. I kept its categories and answers verbatim, all as headers. Then I added additional explanations or thoughts below the headers.

The future looks bright. Image by Elena Koycheva from Unsplash.

Sensor advancements

Camera sensors of the future. Photograph by Alexander Andrews from Unsplash.

Quantum sensors: Sensors inspired by quantum mechanics, offering significantly improved low-light performance and sensitivity.

Quantum dots are already used in TVs. They're tiny semiconductor crystals that can be tuned to respond to specific colors, capturing them better without the need for a color filter. The promise is that they capture a wider range of colors. For us night photographers, this is music to our ears. They would potentially absorb more light, which would improve low-light performance while reducing noise. Bring them on!

Metamaterials: Lenses made from metamaterials could achieve incredible focusing capabilities and zoom ranges without bulky traditional lenses.

Metalenses, as some call them, use techniques borrowed from the semiconductor industry to make ultra-thin lenses. They're made of a silicon-on-glass substrate. What's the benefit? Decreased aberrations, meaning they would produce sharper, clearer images. They can also be completely flat, which would make cameras even thinner. Additionally, they can be designed to filter specific wavelengths or create complex optical effects.

Could these eventually be used to sense magnetic fields, detect radiation, or capture sound? Could they enable more reliable facial recognition? Determine whether the stone in a ring is diamond or glass? Offer insightful information to health professionals?

This sounds intriguing, although if you thought your Z-Mount lenses were expensive, just wait!

Gemini AI’s depiction of a night photographer one hundred years in the future. Not ten, but one hundred. Why not?

Multi-spectral sensors: Capturing additional information beyond the visible spectrum, like infrared or ultraviolet, for more comprehensive data and creative possibilities.

Would these work hand-in-hand with metalenses? Regardless, the possibility for scientific research such as detecting diseases, studying geological formations, looking for lost cities in the jungle, or determining crop health seems fantastic. Of course, as you might readily imagine, this would also have many military applications, such as target tracking, land mine detection, ballistic missile detection, and more.

Bio-inspired sensors: Mimicking the human eye’s ability to adapt to different lighting conditions.

If the other crystal ball predictions seemed futuristic, this really pushes it into the realm of “Star Trek.” Bio-inspired sensors would mimic the way our eyes work, adapting to different lighting or changes in the scene. They would also be able to capture details in both bright and dark areas simultaneously. Would these ever be used for photography?

They may have some uses in robotics or driverless cars. While I’m not super-excited about driverless cars at the moment, the application for autonomous scenarios is certainly intriguing. And they could be used to warn drivers as well.

Computational photography


Advanced AI-powered editing: Built-in AI could offer real-time editing suggestions, noise reduction, and even object removal or replacement.

In April 2021, I wrote an April Fools' Day article about Sony AI, a groundbreaking artificial intelligence camera that used a positronic neural network to offer suggestions such as posing people in a flattering manner. This camera was also equipped with Anti-Cliché AI, which would automatically drain the battery if the user photographed people on a couch in a field, sitting on train tracks, pointing flashlights at the Milky Way, or taking selfie photos of feet at the beach.

If Gemini AI is correct, you can begin referring to me as “Photostradamus.”

Compositional assistance: AI suggesting optimal framing and composition for different scenes.

See above!

Automatic editing and storytelling: Cameras automatically generating edited photos or even short videos with music and transitions.

Luminar Neo and iPhone’s Memory Mixes, introduced with iOS 15, somewhat approach this already.

Connectivity and interactivity

Drone. Photography by Dose Media on Unsplash.

Seamless integration with other devices: Cameras seamlessly connected to AR/VR headsets for immersive photography experiences or drones for aerial shots controlled directly from the camera.

This almost seems like a certainty.

Live streaming and editing: Cameras with built-in capabilities for live streaming and on-the-fly editing, perfect for content creators.

This also seems like a certainty. With TikTok driving so much of what people do, I don't see how this could not happen. Unless, of course, 10 years from now, everyone is so sick of social media that we've reverted to telling stories around the campfire. That would feel revolutionary, much the way someone singing over an acoustic guitar sounds fresh after a steady diet of pop music.

Interactive features: Cameras incorporating gesture control or voice commands for hands-free operation.

I can’t wait to issue my camera a command to bring me a sandwich or walk over and attach itself to my tripod.

Hardware and design

Wearables. Photography by Igor Omilaev on Unsplash.

Modular cameras: Flexible systems where different components like lenses and sensors can be swapped easily

Modular cameras already exist. Also, various Kickstarter campaigns and manufacturers have flirted with this idea for a while. I’m not sure if we will see major camera manufacturers getting on board with this any time soon. On the other hand, if in ten years we have metalenses, bio-inspired sensors, and quantum dot sensors working alongside “vintage” CMOS sensors, maybe they could surprise us!

Foldable or wearable cameras: Compact and convenient cameras that integrate seamlessly with clothing or accessories

Google, Meta, and others have been developing and flirting with wearable cameras for years. Remember Google Glass?

If a professional camera can be folded up to fit in a pocket, that's intriguing. We might even be able to get some strange shooting angles with this! However, wearable cameras, particularly ones that can be integrated into clothing or accessories, seem ripe for abuse.

Brain-computer interfaces: Cameras controlled directly by thought, allowing for hands-free operation and intuitive interaction

If this ever occurs, I hope the connection is more reliable than Bluetooth. If we thought that bio-inspired sensors were approaching the stuff of “Star Trek,” this plants two feet firmly in that universe.

No buttons. No touchscreens. You control the camera by focusing your mind on specific actions such as taking a picture, zooming, or adjusting settings.

This needs a few more steps to become reality, though. Today's brain-computer interfaces are large and bulky.

3D and holographic capture: Cameras capturing realistic 3D models or even holographic projections of objects and scenes

Gemini closed with a mention of something that I have fantasized about for years: holographic photography. These cameras would record complex data about a scene and generate 3D models of objects or scenes, which could be viewed from various angles. This might be a possibility for virtual reality applications if Apple and Meta eventually have their way.

This, of course, veers into holodeck territory, which exists in, once again, “Star Trek.” I had always thought that if holodecks of that sort existed, no one would ever come out of there.

Bing AI’s depiction of a holo-simulation. Could we be inching toward this in the future?

Regardless, beyond the creative or odd aspects, there could be a lot of potential in the fields of science, education, and health care.

Gemini AI’s depiction of a camera ten years in the future.

What say you?

Gemini AI’s depiction of photographers one hundred years in the future. Apparently, the future one hundred years from now is very blue and hazy. That’s what I’m getting from these depictions!

What predicted technologies would you love to see implemented in cameras of the future? What do you foresee in cameras of the future? In what ways are you excited or concerned about future camera technologies? Leave your comments below!