I've worked on this problem quite a bit.
The hard thing to accept is that there can be large areas in an image where there isn't a significant difference between the foreground and the background. In other words, the raw data doesn't contain enough information to detect an edge there, and no segmentation algorithm can find what isn't in the data.
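To make this concrete, here's a toy 1-D sketch (all values hypothetical): a scanline where the foreground/background contrast is smaller than the noise amplitude. Any gradient-based edge detector sees larger jumps from noise alone than at the true boundary, so no threshold can pick out the edge.

```python
# Hypothetical scanline: background ~100, foreground ~101 (contrast of 1
# gray level), with deterministic "noise" of amplitude 3 for reproducibility.
noise_pattern = [0, 3, -3, 1, -1]
scanline = []
for i in range(100):
    base = 100.0 if i < 50 else 101.0  # true boundary between i=49 and i=50
    scanline.append(base + noise_pattern[i % 5])

# Simple gradient-based edge detection: absolute finite differences.
grads = [abs(b - a) for a, b in zip(scanline, scanline[1:])]

# Gradient at the true boundary vs. the largest gradient caused by noise alone.
edge_grad = grads[49]
noise_only_max = max(grads[:48] + grads[52:])
print(edge_grad, noise_only_max)  # the noise gradients dominate the real edge
```

Here `edge_grad` is 2 while pure-noise gradients reach 6: a threshold low enough to catch the edge also fires all over the flat regions. The information simply isn't there.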
Humans see edges "semantically". We know there should be one there and so we "imagine" one.
I work on image segmentation and recognition, and my code cycles between those two tasks. I segment and look for noisy and smooth edges, then I postulate where a smooth edge should be (for example, if I recognize a smooth edge "parallel" to a noisy edge, I check whether the noisy one could be smoothed to be "similar" to the other one). The end result is that I can separate the foreground from the background, but I don't try, or hope, to recover the edge a human would find. My segmented images have an uncanny-valley problem: the edges are correct in some areas and slightly off in others. The image looks real and fake at the same time :)
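The "check if the noisy edge could be smoothed similar to the other one" step can be sketched roughly like this (a minimal 1-D illustration, not my actual code; the contours, offset model, and tolerance are all assumptions): represent each edge as a y-coordinate per column, postulate that the noisy edge is the smooth one shifted by a constant offset, and accept that postulate only if the residual is small.

```python
# Hypothetical edge contours: y-coordinate of the edge at each column x.
smooth_edge = [50.0 + 0.2 * x for x in range(20)]   # clean, recognized edge
jitter = [0.8, -0.6, 0.4, -0.9, 0.5] * 4            # deterministic noise for the demo
noisy_edge = [y + 10.0 + n for y, n in zip(smooth_edge, jitter)]

# Postulate: the noisy edge is the smooth edge shifted by a constant offset.
# Estimate the offset as the mean column-wise difference.
offset = sum(b - a for a, b in zip(smooth_edge, noisy_edge)) / len(smooth_edge)

# If the worst residual after removing that offset is small, accept the
# postulate and replace the noisy edge with a shifted copy of the smooth one.
residual = max(abs(b - a - offset) for a, b in zip(smooth_edge, noisy_edge))
TOLERANCE = 2.0  # assumed tuning parameter
if residual < TOLERANCE:
    repaired_edge = [y + offset for y in smooth_edge]
```

A richer model (affine fit, per-segment offsets) follows the same accept/reject pattern; the point is that the smoothing is hypothesized from a recognized neighbor, not extracted from the pixels.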