Can a “camera” have no lens or sensor? With Paragraphica, it can. Its creator has developed a strange “context-to-image” device that interprets the place around it.
So, um, what is it?
Paragraphica is a “context-to-image” “camera” that uses location data and artificial intelligence to generate a “picture” of a specific place and moment. The “camera” exists as a physical prototype, but there is also a virtual “camera” you can try. If the server has not crashed!
Designer Bjørn Karmann has developed a ‘camera’ straight out of a science fiction film. It has no lens; instead, it has a strange red cobweb-like attachment on the front. The device is inspired by the star-nosed mole, an animal that uses its nose rather than its eyes to navigate its environment. Since the camera takes in no light, the mole is an apt metaphor.
The viewfinder displays a real-time description of your location. When you press the button, the “camera” creates what Karmann describes as a “scintigraphic representation of the description”. Three dials on the top of the device let you control the data and AI parameters that shape the image.
What the hell is scintigraphy?
Of course I had to look up “scintigraphy.” The National Cancer Institute describes it as “a procedure that produces pictures (scans) of structures in the body, including areas where there are cancer cells.” Wikipedia states that it is also known as a gamma scan and is a diagnostic test in nuclear medicine.
What do the images look like?
Below are some examples. You can also use the virtual “camera” to create an image of your own.
What data does the device use to create the images?
According to the project’s website, the device collects data using open APIs (Application Programming Interfaces, software interfaces that let two applications communicate) to determine location, weather, time of day, and nearby places. It combines all of that data into a paragraph describing the place and the moment, and then converts that paragraph into an image. The result is really an interpretation of how the AI model “sees” the place at that moment.
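The context-to-paragraph step described above can be sketched in a few lines of Python. This is purely illustrative: the function name, the wording of the paragraph, and the hard-coded values (standing in for live API responses) are my assumptions, not Karmann’s actual code.

```python
# Hypothetical sketch of Paragraphica's context-to-paragraph step.
# The real device queries open APIs for this data; here the values
# are hard-coded placeholders so the example runs on its own.

def build_paragraph(location, weather, time_of_day, nearby):
    """Combine context data into one descriptive paragraph, which a
    text-to-image model would then turn into a picture."""
    places = ", ".join(nearby)
    return (
        f"A {time_of_day} in {location}. The weather is {weather}. "
        f"Nearby there are {places}."
    )

# Placeholder values standing in for live API responses.
paragraph = build_paragraph(
    location="Amsterdam, Netherlands",
    weather="overcast with light rain",
    time_of_day="quiet early morning",
    nearby=["a canal", "a bakery", "a tram stop"],
)
print(paragraph)
```

The paragraph, not any captured light, is the only “exposure” the device makes; everything visual comes from the model’s interpretation of that text.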
Says Karmann, “Interestingly, the photos capture some of the memories and emotions of the place, but in an eerie way, as the photos never look exactly like where I am.”
According to Karmann’s website, the device runs on a Raspberry Pi 4 and uses the Stable Diffusion API for text-to-image generation, along with Noodl and Python code.
What do the three dials do?
- The first dial controls the radius of the area the device is searching.
- The second dial controls what amounts to a kind of ‘film grain’.
- The third dial determines how closely the AI follows the paragraph. Karmann compares it to sharpening or blurring in a camera.
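One plausible way the three dials could map onto generation parameters is sketched below. This is an assumption about how a Stable Diffusion-style backend might consume them, not Karmann’s actual implementation; the ranges and names are invented for illustration.

```python
# Hypothetical mapping from the three physical dials to generation
# parameters. Assumes each dial reads as a value from 0.0 to 1.0.

def dials_to_params(radius_dial, grain_dial, adherence_dial):
    """Translate dial positions (0.0-1.0) into pipeline parameters."""
    return {
        # Dial 1: search radius for nearby-places lookups, in meters
        # (assumed range of 100 m to 10 km).
        "search_radius_m": int(100 + radius_dial * 9900),
        # Dial 2: noise seed, giving a film-grain-like variation.
        "noise_seed": int(grain_dial * 2**31),
        # Dial 3: how strictly the model follows the paragraph,
        # analogous to Stable Diffusion's guidance scale.
        "guidance_scale": round(1.0 + adherence_dial * 14.0, 1),
    }

params = dials_to_params(0.5, 0.25, 0.8)
print(params)
```

Turning the third dial up would push the image toward a literal rendering of the paragraph; turning it down would let the model drift, which fits Karmann’s sharpening-versus-blurring analogy.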
Why do you keep using quotes around “camera” and “photo?”
I’ve always thought of a camera as a device that captures light. Whether digital or analog, that is what cameras have typically done, and it is how Merriam-Webster, Oxford, and Wikipedia generally define the word. Paragraphica may produce pictures, but it’s not really a camera.
And they’re not really photos, either. After all, a photograph is taken with a camera, and the word itself means “drawing with light.”
Is this splitting hairs? For me, no. What’s your opinion?
Is this something that will inspire you to experiment? Let us know in the comments.