How about an event where /e/ users shoot photos using the built-in camera app (i.e. Open Camera) and edit or process those photos using only free software like RawTherapee, darktable and GIMP? The photos would then be uploaded here or via the /e/ cloud, and the best ones selected and used as the default wallpapers for /e/. Each photographer would give a tutorial of sorts about how they shot a particular photo and then edited or processed it, while also saying which device they used, what settings they use day to day in the camera app, and what settings they used for that shot. This would help people who want to learn to use the camera app, the settings they can change to get better photos, and how they might edit or process a photo further to make it even better.
10 posts were merged into an existing topic: Discuss about OpenCamera
Using already-shot photos/screenshots: open a photo on a phone, then shoot that phone, and repeat again and again until peak satisfaction is achieved; then use the last shot/screenshot so it looks like it goes to infinity and beyond.
I agree, my guess too!
You’re really close but that’s not the correct answer
There is only one single mirror or other loop-back device necessary to achieve this kind of result. Nevertheless, I don’t see this kind of “effect” as very satisfying in an artistic/creative sense.
Take a look, for example, at the works of Mario Klingemann if you want to get an impression of how intellectual “reflection” on the new means of electronic photography and image processing may lead to exciting new fields of artistic work.
What’s the correct answer?
I never said it was artistic; I have no pretensions.
It’s just for fun and to start the photo event proposed by Elli.
Thanks for the reference to Mario Klingemann, and no, I didn’t use any mirror to do it.
Does someone maybe have an idea which mobile phone with /e/ might deliver the best photo/video quality?
I want to know the exact same thing. We can fetch camera info from our device and post it under the device section of this forum. example. I think it’s important that the device supports RAW capture; then the camera app can process the RAW data into a picture.
It would depend more on the editing/processing than on the phone; getting a good photo is almost 80% editing/processing and only 20% camera/hardware/phone/device, assuming you know the basics of photography, i.e. things like controlling shaking hands when shooting. What does depend on hardware is resolution, lens specifications, etc. Regarding the best /e/-supported phones for photo/video, it’s devices like the Essential Phone, Pixel 2 XL, LG G5, OnePlus 5 and 5T, Samsung Galaxy S7, S7 Edge, S9 and S9+, Xiaomi Mi 6, Mi Mix 2 and 2S, and Poco F1, mostly because they have the best ISP (image signal processor), which is found on the best SoCs (system on chip) like the Snapdragon 800 series and Samsung Exynos equivalents.
No, I don’t think we’ll get the significant information from this approach, because it is very often simple misconfiguration, or missing adaptation effort, that causes /e/ photo apps to work worse than necessary – e.g. they cannot find/use all the cameras on the phone, etc.
For a long time I was thinking like you, and just wanted sufficiently big sensels, good lenses and access to the RAW sensor data, to refine everything afterwards with wonderful tools like darktable, but recently I had to change my mind. Many improvements concerning photography on mobile phones cannot be reproduced by this traditional workflow. Many of the machine-learning-based image-processing techniques used in the best closed-source apps available in this field right now process synchronous input from multiple cameras, and rather huge image sequences instead of just one single sensor readout. They often process an amount of image data which could hardly be written to an SD card in real time for later use by some kind of RAW-processing software. And the results and capabilities of this class of apps (e.g. the research behind Google’s camera app) look simply impressive. We therefore shouldn’t ignore them just because there isn’t much similarly powerful free and open-source camera software available so far.
Yes, that’s the thing. With an iPhone you get live bokeh in your viewfinder, but when using the Google Camera app on any phone, including Google’s own, you don’t get that: it’s processed after you take the shot, even though recent SoCs used by Android phones are more powerful than the iPhone 7 Plus SoC which introduced this. The reality is that it can be done in real time on RAW, but it requires better-quality code, which for now is rare, with only a few exceptions like the iPhone and actual cameras. An actual camera has an SoC that can’t even match today’s budget phone SoCs, but it does a better job because the code is high quality and single-purpose; it’s kind of like an ASIC. The reason I believe editing/processing is 80% responsible for good photos is that I see the results from the Google Camera app, and they are good on their own, but we can make a photo just as good if not better by editing/processing it ourselves, without using AI algorithms. What Google is doing is editing/processing the photo as a person would, but using AI, and it’s good enough that people consider it beautiful. So it’s a sort of automation, like applying a premade filter, but not just the same filter every time; instead there are multiple filters for different scenarios which we can select. Regarding portrait-mode bokeh, the same result if not better can be achieved by editing/processing. On the iPhone the bokeh is more realistic, as things that are near are less blurred than things that are far, but the Google Camera app makes everything equally blurry, close or far, to the point where it feels fake if you know enough about photography.
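The point about blur scaling with distance can be sketched numerically. Below is a toy depth-aware blur, not any real camera pipeline: it splits a synthetic depth map into bands and blurs each band with a radius proportional to its distance, so near pixels stay sharp and far pixels get smoother, the way realistic bokeh behaves. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def box_blur(image, k):
    """Separable k-tap box blur (k odd); k == 1 returns the image unchanged."""
    if k <= 1:
        return image.astype(float)
    kernel = np.ones(k) / k
    # Blur rows, then columns (zero-padded "same" convolution).
    tmp = np.apply_along_axis(np.convolve, 1, image.astype(float), kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, kernel, mode="same")

def depth_aware_blur(image, depth, max_radius=3, bands=4):
    """Blur each depth band with a radius proportional to its distance:
    a crude 'realistic bokeh' where far objects blur more than near ones.
    depth is a per-pixel map in [0, 1], with 1 meaning farthest."""
    out = np.zeros(image.shape, dtype=float)
    edges = np.linspace(0.0, 1.0, bands + 1)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # Last band also catches depth == 1.0 exactly.
        mask = (depth >= lo) & ((depth < hi) | (i == bands - 1))
        k = 1 + 2 * round(max_radius * lo)  # lo == 0 -> k == 1 -> no blur
        out[mask] = box_blur(image, k)[mask]
    return out

# Synthetic scene: noise image, depth rising from left (near) to right (far).
rng = np.random.default_rng(0)
img = rng.random((64, 64))
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
result = depth_aware_blur(img, depth)
```

Running this on the noise image, the left (near) columns come back untouched while the right (far) columns are visibly smoothed; a uniform blur, by contrast, would flatten both sides equally, which is exactly the "feels fake" effect described above.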
Processing input from multiple cameras, among other things, is better in non-free apps, which is a fact that can’t be ignored, even though it’s probably because vendors don’t release their camera software as free software. Keeping it non-free means that if you use a different camera app than the one that came with your device, you don’t get the same quality, nor can you use multiple cameras, etc.
The first thing needed to get good pictures is the phone supporting RAW readout, right? This app shows the support level. Then a really good app has to be made …
The information reported by this app may be very misleading.
On my Xiaomi Mi A1 it does not report the existence of all the cameras available on this phone. Sure, this could be interpreted as an indicator that this device isn’t supported very well by /e/. But in fact it isn’t very hard for users to work around these limitations and make better use of the actual hardware with small modifications to the default settings. But again: your approach isn’t able to report and compare these actual possibilities.
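For context, the workaround usually passed around in the community for devices like the Mi A1 is enabling the Camera2/HAL3 interface via a system property. This needs root, and the exact property is a community-reported detail, not something guaranteed for every device or build:

```
# Appended to /system/build.prop (root required; back up the file first).
# Community-reported switch to expose the Camera2/HAL3 API on devices
# that ship with it disabled:
persist.camera.HAL3.enabled=1
```

After a reboot, apps that use the Camera2 API can then probe capabilities (such as RAW capture) that the default configuration hides, which is exactly the kind of difference a simple support-level report cannot see.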
These differences in actual behaviour could very likely be caused by patent claims, but I don’t see any technical reason why it shouldn’t be handled just the same by other devices as well.
This particular application of ML looks rather trivial nowadays, just like the mentioned optimization by scene guessing and many other “cosmetic” refinements.
But that’s not the kind of innovation which I would see as really beneficial for serious photo work. More advanced noise reduction, useful enhancements of the achievable image resolution, correcting motion and lens distortions by ML techniques, etc., on the other hand, are rather useful, although they may not sound very spectacular. That’s more the kind of progress and real improvement which I would emphasize concerning these modern mobile camera apps.
Sorry, I’m a little late, but here is how I took my picture:
First I started an /e/ device and took a photo of the boot screen.
Then I opened the photo of the boot screen. I took the first phone (the one that had booted) and opened Open Camera. Finally I took a screenshot. Then I opened the screenshot, opened Open Camera again, and took another screenshot.
I repeated the last step using all the phones I had at that moment.