Computational Photography with Luminar 4 (skylum.com)
88 points by deathtrader666 on Oct 23, 2019 | 62 comments


If you're thinking of pre-ordering Luminar 4, I'd strongly urge you to wait for actual reviews or try the demo yourself.

I purchased Luminar 3 and it was a sluggish, unstable mess, especially on Windows. They promised big features like a DAM (digital asset manager), but what shipped was missing basics like the ability to edit metadata. But hey, it ticked the marketing checkbox, right?

They're prioritizing adding new features to generate publicity and hype while ignoring actual speed, stability, and usability.

Only the comment sections of photography websites actually address speed. Example: https://www.dpreview.com/news/3286739394/luminar-4-available...


Yup, listen to this guy. I have a top-of-the-line MacBook Pro and Luminar is barely usable even with nothing else running on it. The features are OK, but it's basically an unusable product.


If they're CPU-limited, why not advertise an eGPU or iGPU as a requirement?

I’m more curious whether the results are as consistently good as advertised; NN results are usually cherry-picked for the demos.


Odd. Works relatively well on my 2014 MBP. (With LR running in the background, too)

I mean, it goes further to your point - try it out. It seems to depend _very_ much on the use case.


I mean, if it's using DNN then you're going to be pretty much shit out of luck for perf unless your machine has an nvidia graphics card right?


If you’re using pretrained models then they should run OK. It’s the same reason why TensorFlow runs on phones: to run models, not necessarily to train them. DNNs take a lot of time to train.
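To make the inference-vs-training distinction concrete, here's a minimal PyTorch sketch (PyTorch rather than TensorFlow, but the principle is the same; the toy model below is a hypothetical stand-in for whatever pretrained network an app ships):

    import torch
    import torch.nn as nn

    # A stand-in "pretrained" model: in practice the weights would be loaded
    # from disk, but inference works the same either way -- a forward pass only.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 3, kernel_size=3, padding=1),
    )
    model.eval()  # inference mode: freezes dropout/batch-norm behavior

    with torch.no_grad():                # no gradient bookkeeping, no backprop
        x = torch.randn(1, 3, 128, 128)  # a hypothetical preprocessed image
        y = model(x)                     # forward pass -- the cheap part

    print(y.shape)  # torch.Size([1, 3, 128, 128])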


TF runs at seconds-per-frame speeds on phones without dedicated accelerators.

Even inference is going to be slow when working on a 20-megapixel image. That's as much work as a 1,220-sample batch at 128x128 resolution.
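A quick back-of-the-envelope check of that ratio (plain Python, using only the numbers above):

    # One 20 MP frame vs. a batch of 128x128 crops: conv-net inference
    # cost scales roughly with pixel count.
    image_pixels = 20_000_000   # 20 megapixels
    crop_pixels = 128 * 128     # 16,384 pixels per sample
    print(round(image_pixels / crop_pixels))  # -> 1221, i.e. the ~1,220 figure above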


I've got Luminar 3 on a 27-inch Retina iMac and it performs adequately. Some of its "AI" filters are quite impressive. The Sky filter, for example, really finds the sky, and only the sky, in a picture and changes it. To get the same result in Photoshop I'd have to spend half an hour making a carefully masked Curves layer.

OTOH, the absence of IPTC metadata editing means I can't use it to manage my image library, because I rely heavily on IPTC keywords.

Also, Luminar has a very strong "hands-off" attitude toward the file system and the original image files. So all the fancy improvements you make are held in the app's catalog. If you want an actual file containing your improved image you have to "Export" it. You can't use Luminar, as you could use Adobe Bridge, to move or copy image files from folder to folder.


As a general rule, never pre-order stuff and always wait for reviews.


Does this qualify as 'computational photography'? My interpretation of that term is that it's more about using non-standard optics processed with a computer than about applying deep learning and other AI-based techniques, which seems to be what this is. Otherwise, this looks cool ;) I am glad to see AI image processing becoming more accessible.


Computational photography is basically anything that substitutes a digital process for a chemical or optical technique.

You don't need to be using non-standard optics or deep learning AI techniques. It can be as simple as averaging a bunch of pictures together. HDR is computational photography, as is a panorama.
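For instance, the averaging case is only a few lines of NumPy. A minimal sketch, assuming pre-aligned exposures (the file names are hypothetical):

    import numpy as np
    from PIL import Image

    # Average several aligned exposures of the same scene: random sensor
    # noise cancels out while the static scene content reinforces itself.
    paths = ["frame1.png", "frame2.png", "frame3.png", "frame4.png"]  # hypothetical
    frames = [np.asarray(Image.open(p), dtype=np.float64) for p in paths]

    mean = np.mean(frames, axis=0)  # per-pixel mean across frames
    Image.fromarray(mean.astype(np.uint8)).save("averaged.png")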

I'm sure Luminar 4 has similar capabilities, but all they show is dodging and burning, beauty filters, and some sky replacement. Those technically fit the definition, but often aren't thought of as computational photography.


> Computational photography is basically anything that substitutes a digital process for a chemical or optical technique.

By that definition, every digital camera/phone is computational photography


By "using a digital process", I mean altering the image rather than simply capturing it.

However, every single camera phone I've ever had had PLENTY of computational photography built into it.


It is. Browsing pictures maybe not, but certainly capturing and transforming.


Any digital photography is by definition computational.

You can make an argument that using deep learning to affect the sky is “more computational” in a way than a “basic” CCD to JPEG pipeline, but the boundaries become fuzzy and arbitrary.


Computational means the image was computed as opposed to being revealed.

It's the same difference as scanning book text and reprinting it in a different font vs. translating and rewriting the text. You would say that the scanned, OCR-ed book is basically the same, while the translated and rewritten one is no longer exactly the same text.

The same goes for the image. Making a photo is preserving an image. We can apply a couple of different transforms, like sharpening or converting to black and white, but the image is basically the same content.

On the other hand, rewriting the image (changing the weather, removing people from the background, or making the subject look younger) changes the content, the meaning of the image.


“Revealed” implies that there is some sort of single source of truth for the image (as in your book example, where the source of truth is the original book, to which transcriptions could be compared for accuracy).

That is not the case for photography.

In film photography, the “latent image” is mere fiction - there is no single image waiting to be revealed. What there is is a bunch of silver salts suspended in gelatin reacting to temperature, electromagnetic radiation, etc. When you want to turn this into an image that the human eye can make sense of, you make them react further with a specific developer/fixer at a specific dilution, in a controlled environment. Changing any of these parameters will lead to a different image, well after the initial exposure was made.

In digital photography, you have a set of charge readings that you then turn into colored pixels following an arbitrary process that can be as complex as you want it to be. You can merely average nearby values and use the result, or run them through a neural network designed to <make colors more accurate | make people look younger | replace trees with sharks | whatever you want> and use the result. Either is a computational process.
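The "merely average nearby values" option really is that simple. A sketch, assuming an RGGB Bayer mosaic (the raw array here is random noise, standing in for real sensor readings):

    import numpy as np

    # Naive "demosaic": collapse each 2x2 RGGB Bayer cell into one RGB pixel,
    # averaging the two green sites. Half the resolution, but it turns raw
    # charge readings into something the eye can interpret.
    raw = np.random.rand(3000, 4000)  # hypothetical raw readings, RGGB pattern

    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]

    rgb = np.dstack([r, g, b])  # a (1500, 2000, 3) image, values in [0, 1]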

Now we can certainly draw semantic distinctions, like “this person wasn’t in the picture when it was taken”, or “this person had a pimple on their nose that was removed”. But as mentioned earlier, these are fuzzy and arbitrary. Is a long-exposure shot of the Milky Way a “rewritten” image? What about an HDR image? What to make of early digital photography features like red-eye removal?

(Answer: it doesn’t matter, all photography is the result of a chemical or computational process, and there is no such thing as a “true” image)


> Any digital photography is by definition computational.

That's not what "computational photography" means to most people. For example, the QuickTake is not considered to have done computational photography.

Computational photography uses computation to go beyond what a simple lens → sensor → sensor-to-image algorithm (like a demosaicing algorithm) can do.


Certainly most digital cameras use far more sophisticated sensor-to-image algorithms.

Not to be outdone by in-camera processing, photographers will opt to shoot RAW and have PCs do the conversion/post-processing, which in many cases uses even more sophisticated algorithms.


What determines whether a particular “sensor-to-image algorithm” qualifies as “computational photography” or not?

(keeping in mind that we are really talking about many different algorithms running across the hardware stack, i.e. likely distributed across multiple chips)


Luminar is really neat. I've been using Luminar 3 for a while, and it does some really cool stuff -- but 3 didn't really live up to its hype.

Don't get me wrong, I use it sometimes to clean up and enhance my images, but really it's not so dramatically different from other tools.

Some of the AI features are really slick, but not quite as magical as the demos suggest. Or maybe you just have to actually spend 100s of hours to learn to use it effectively, and I'm just a n00b.


A big problem with these new AI techniques as applied to photography is that the algorithms tend to be very opaque and typically give very little control to the user.

You can get better over time at using the dodge/burn tools, or selection masks, or color curves, etc in Photoshop - and use them at very granular, minute levels. If the final result is not up to your expectations, you know that it’s due to your own doing and you can try again, get someone to help you, watch a tutorial, etc.

However, all these magical AI sliders give you very little insight into what’s actually going on behind the scenes, and typically only expose a few vague sliders for the photographer to manipulate, often in an all-or-nothing fashion. If you like what cranking up the "AI Skin Defect Removal" slider does to the texture of the nose but not the tone of the cheek, there's not much you can do to address that.

While it makes for cool demos, and the appeal of “one click and your boring sky becomes picturesque” is clear for an entire category of users, this makes them pretty boring and limited as creative tools.


I've used Google's Snapseed, which has very similar controls and aims to streamline a lot of "photoshopping". It's free (though mobile only) and works great, I've had _a lot_ of fun just playing around, changing the distance between my eyes, giving myself an ogrish forehead, turning a smile into a frown, and in general changing anything and everything about the photos I played with.


If you want to change the shape of a face, Photoshop has had Face-Aware Liquify for a while now. I'm guessing a lot of photographers already have Photoshop.

https://www.youtube.com/watch?v=2zhgvNfJTnM


Am I alone in thinking the originals are way better than the processed versions, even in the demo on that link? Somehow all this excessive tone filtering, "smart" unsharp masking, artificial noising and noise filtering - all that computational stuff - takes the life out of everything. The dynamic range is narrow. Stuff is compressed into a flat mess of colors, similar to how they compress audio these days. Huge "ringing" artifacts on contrasting edges. Gross. How is that elevated photography?


Very cool concept, but the video copy made me laugh out loud.

"Hours of learning" for your hobby.... God .... anything but that!!


My experience with Luminar 3 was weird. It was one of those pieces of software which looks great in screenshots and promo materials and then disappoints you profoundly when you try it out. It just didn't produce the same quality of results as Lightroom, nor was it as easy to work with as Lightroom. I really wanted to like it too, I'm not a fan of subscription software, and the best alternative to Lightroom (Capture One Pro) is either $300 for a license, or $20/mo, both of which sound pretty pricey for hobbyist use. After a couple of weeks of trying I was back to Lightroom.

So before you rush to plop down your credit card for Luminar "preorder", I encourage you to wait for a trial version and try it out. There, I saved you $99. You're welcome.


I know this is just automating how people have been touching up photos for decades, but IMO it is still a bit sad, especially when it comes to touching up people's faces. God forbid we should see each other as we are, slightly yellowed teeth, pores and all.

Again, I know this has been going on for decades, but I still worry that by automating more of it, it will lead to a bigger degree of "sameness" in photos.


I used to think this way too, but I’ve since come around to a different way of thinking.

Unless photos are taken for documentation, it’s more important for the photo to capture the feeling the photographer is trying to convey. Five minutes after you talk to someone, you’re not going to remember every single blemish on their face; you’ll remember the general shape and a few distinguishing features. I would much rather the photo match the memory than the fact.

Of course, this can and has been taken too far in some instances, but I see no problems with removing a pimple and whitening teeth to more warmly represent (and therefore remember) my subject.


I think the problem is that since portrait retouching is usually so subtle, you actually start confusing it with real life. People start subconsciously thinking that this falsity is how you should look.

And to clarify, it's one thing to hide a pimple, but when it comes to the core features of one's face (e.g. "face slimming", pore smoothing, even undereye color IMO) you start distorting who that person actually is.


Is Luminar 4 a suitable Lightroom replacement?


It says on the site that it's usable as a LR plug-in too.


The "Luminar Libraries" feature provides Lightroom-like photo management, but based on Luminar 3 reviews it's probably not going to be as rich as Lightroom's. (I've pre-ordered Luminar 4 in hopes that it's good enough that I can ditch Lightroom).


There's a big vendor lock-in thing going on, isn't there? I've got tens of thousands of images cataloged in LR. It would be hard to move away from that and re-index all the images again.


It also works as a plugin to Apple Photos, allowing you to use Photos to organize, and Luminar to edit them.



Not great that the demo video makes a portrait "more beautiful" partly by lightening the skin tone (at 43 seconds in).


I think the effect is really about improving contrast to make features stand out. It's effective even if someone is already lily white. Nothing to do with race imo.


Unfortunately a lot of these models are trained more on white faces and end up lightening everyone's skin as a result. My photographer friend is constantly fighting with photo-retouching tools to not whitewash their subjects.


I think you're right. But I still think they should change the video. People (myself included) are understandably sensitive to potential racism embedded in AI.


At the presale price, it's worth the $99, even just to experiment with it and see how well it does.


ugh, horrible, this is as bad as LED stage lighting


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


What is?


It’s interesting that most of the demonstrated applications are examples of how to lie with photos. Images alone can’t be trusted, but it makes me a little sad the more readily these tools are made available. Oh look, a program based around making photos that lie that’s cheap. I suppose I’ll have to start defaulting to trusting nothing even more. It’s not really a direction I’d like public discourse to have to go in, if we could collectively agree to behave a little.


It's almost impossible to draw a bright line here. All photography, even film-based, involves some amount of interpretation. Different choices of film, or various options in Lightroom, Luminar, or whatever, entail changing the appearance of the output at least in the dimensions of brightness, contrast, saturation, white balance, and so forth. You can't not make a choice, you're always going to be imposing some artistic interpretation.


There is a huge difference between changing global variables like film speed, aperture, etc., and tools like these that affect individual pixels and can even create new content, like the clouds. It's cool that it exists, but I find it much less interesting than traditional photography, where you only have very limited and indirect control.

It's very similar to wine or whisky vs. soft drinks. When you're making whisky you can select the ingredients, vary the process, age it in different casks, etc., but you can't add spices. If you're making a soft drink or something similar you can do whatever you want, just like in Photoshop photography. Sure, it's much more flexible, but that probably also means it will never be seen as quite as refined.


> I find it much less interesting than traditional photography where you only have very limited and indirect control.

If you ask the average person on the street to name a photographer, probably the only answer you'll get is Ansel Adams. He's best known for his stunning imagery of America's national parks. I don't think there's anyone who would claim that his portrayals of them are inaccurate in any way, certainly not that they're lies.

Yet Adams was a master of darkroom manipulation. Part of what makes his portfolio so striking is his deft use of techniques like dodging and burning. This is certainly a bolder manipulation than something like aperture. He was doing this on film in the early-to-mid 1900s, so I think it's safe to say that this recognized master of traditional photography was already extensively manipulating imagery in many of the same ways that the unaware are currently decrying.


My point was not about accuracy or truth, it’s about me finding art forms more interesting where the control is less direct. Even what you describe still sounds very far from the pixel level control you get with photoshop.

I’m not sure who is “unaware” of what, but I do think that in general things that are more strictly “regulated”, where you have less freedom, like winemaking, are more highly regarded. Maybe there are lots of exceptions, I don’t know. Perhaps it’s just because old art forms tend to be less freely controlled, and by their nature have higher status.

But with photography, are there any Photoshop magicians who are as highly regarded as Ansel Adams?


I agree although news organizations try to have bright line rules as much as they can. You're right that there is still a lot of flexibility within those lines but certainly some manipulations that would generally be considered fine in an art photo (e.g. editing out distracting objects that aren't relevant to the main subject) are strictly verboten in photojournalism.


And you're right that photojournalism attempts to define some lines here.

I'm a serious photographer. For myself, I'll create any imagery I want - after all, I'm not trying to convince a jury of something objectively true. The line that I personally choose to draw is to make it something that could be true - I won't do the crazy oversized full moons, for example. But that doesn't mean that nobody else should create such an image.

But I'll go back to my original claim, that EVERY image involves interpretation. The data coming off your camera sensor isn't readily interpreted by the human eye at all. It needs to impose some transformation to get to a visual representation.
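For example, even the most "neutral" raw conversion applies a tone curve, because sensor response is linear and displays (and eyes) are not. A minimal sketch using the standard sRGB transfer function:

    import numpy as np

    def linear_to_srgb(linear):
        """Map linear sensor values in [0, 1] to sRGB-encoded values."""
        # Piecewise sRGB transfer function: linear toe, then a 1/2.4 power curve.
        return np.where(linear <= 0.0031308,
                        12.92 * linear,
                        1.055 * np.power(linear, 1 / 2.4) - 0.055)

    # An 18% mid-gray reading encodes to ~0.46; skip this transformation and
    # the "untouched" image looks far too dark on any display.
    print(linear_to_srgb(np.array([0.18])))  # ~[0.461]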

Looking at it from the other way around, we're all aware that our own eyes are lying to us. We interpret brightness and shades relative to what we see around them, so just being in different lighting will make us see things differently. It's physiologically impossible to create an image that will always be seen the same, even by a single pair of eyes.


Even if a thin, strong line can’t be drawn, it appears that this software is clearly on the wrong side of the line. It isn’t leveraging AI to create detail not captured by the camera that should be there. It’s using AI to alter the captured content to not be representative of what was there, ie lying. Context is important. A photo is expected to be the truth in most contexts.


> A photo is expected to be the truth in most contexts.

"What is truth?"

I shot a music festival this past weekend, and I'm in the middle of processing the photos in Lightroom. Part of what I'm doing is making images that the artists can use for their own marketing. So I'm making various refinements, like making a face a bit brighter and sharper, smoothing out skin blemishes, and maybe whitening teeth.

Am I lying to people by creating an image that is brighter and sharper? Is it going too far to make the skin look smoother?

And even if we grant that I've changed beyond what the optics delivered, that's only part of the experience. Is it out of line for me to interpret a bit, to try to carry what's being lost from the rest of the senses - the fact that there was fantastic music all around, friendly people, and all that? Maybe a brighter, sharper face is a legitimate way for me to depict the gestalt of the experience.


It’s not only a lie, it’s a lie made for profit. Someone looks at that photo and thinks “this is what it was like”, because that is what the photo is supposed to convey. Instead it’s an oversold fantasy. Is something okay just because everyone does it?


> Instead it’s an oversold fantasy. Is something okay just because everyone does it?

I suspect you didn't even read my reply.

First, I never made any argument along the lines of "everyone does it". I don't know why you include that in your reply to me.

Second, the final paragraph of my reply specifically addresses whether it's "oversold". There's much more to an experience than can be seen from a photograph of the scene, and I think it's fair to try to capture some of that in the final image.

Finally, "it’s a lie made for profit." No, this is outright false. In fact, the music festival I was shooting was entirely free. Nobody paid a penny to see the performances. And the musicians performed because they love music and want to share it; they weren't paid a penny either. You're letting your own prejudices bias your judgment.

EDIT: and just to round out the not-for-profit thing, I was a volunteer, too. My own time, my own camera, my own computer and software. I love the music and I love photography. I do this because I want to give something back to the artists who are giving me their music.


Yeah, nobody ever removed/added/altered content in a darkroom. Completely unheard of. Or framed a picture just so. Or chose the most flattering angle </sarcasm>

Photos have always been edited. This is putting technology into the reach of a few more people, but it's only "lying" if you believe photos represent some absolute truth. They really haven't since almost the beginning of photography.

I'd suggest looking up "Self-portrait as a drowned man" to see how far back that reaches: http://thenonist.com/index.php/thenonist/permalink/self_port...


Did you intentionally miss my point or do you need another explanation? I want to know before I spend effort reframing my thoughts.


Photography is art. It is not meant to be a realistic, as-is interpretation of the scene, but rather a representation that the artist (photographer) wants you to see. It has always been like this, even before computational photography was a thing. Photos would be boring if they were all straight out of camera shots.


Photography runs the gamut from straight from the camera to heavily manipulated. Straight out of the camera shots are pretty popular these days:

https://www.amazon.com/Instant-Cameras/b?ie=UTF8&node=291227


> Photos would be boring if they were all straight out of camera shots.

If they're boring photos, sure. Great photos are great photos.

http://www.vivianmaier.com/


No photo is a “straight out of camera shot”. When dealing with film, you make decisions about how to develop the film, print it, etc. You can make dramatically different results from the same latent image.


> No photo is a “straight out of camera shot”. When dealing with film, you make decisions about how to develop the film, print it, etc. You can make dramatically different results from the same latent image.

Sure, and additionally some decisions will be made for you depending on the underlying technology (e.g. film chemistry).

By "straight out of camera shots", the person I'm quoting meant a natural or realistic style, where images accurately (if not perfectly) represent what the scene looked like when the photographer took the picture. That absolutely doesn't mean "boring" — photos don't have to be over-manipulated or -stylized to be thrilling.



