# I Restored 200 Damaged Photos with AI — Some Results Were Stunning, Others Terrifying
## Opening the Box Changed Everything
Mrs. Chen's hands trembled as she placed the shoebox on my desk. Inside, wrapped in tissue paper that had yellowed with age, was a single photograph. Water had crept up from the bottom, turning the lower third into an abstract blur of browns and grays. The top half showed a young couple on their wedding day in 1962 — her in a modest white dress, him in a dark suit that was probably his only good one.
"This is the only picture I have of our wedding," she said quietly. "The basement flooded three years ago. I didn't find this box until last month, after Harold passed."
No pressure, right?
That moment — sitting across from an 82-year-old widow who just wanted to see her husband's face clearly one more time — crystallized why I'd started this side business in the first place. But it also marked the beginning of a journey that would take me through 200 restoration projects, each one teaching me something new about what AI can do, what it absolutely shouldn't do, and where the line between "restoration" and "fabrication" gets dangerously blurry.
I've spent the last eighteen months restoring damaged photographs using a combination of traditional techniques and AI-powered tools. Some results have been so good they've made clients cry with joy. Others have ventured into uncanny valley territory that still keeps me up at night. This is what I've learned from bringing 200 damaged memories back to life.
## Building My Restoration Workflow From Scratch
I didn't start out as a photo restoration expert. My background is in graphic design, and I'd been doing basic photo editing for years. But when my own grandmother asked me to fix a torn photo of her parents — immigrants who'd arrived at Ellis Island in 1923 — I realized there was a real need for this service that went beyond just technical skill.
My initial workflow was purely manual: Photoshop's clone stamp tool, careful color correction, painstaking reconstruction of missing details based on context clues. A single photo could take me eight to twelve hours. I charged $150 per image and felt guilty about it because I knew most of my clients were elderly folks on fixed incomes.
Then AI restoration tools started appearing. First came the simple ones — automated colorization, basic scratch removal. I was skeptical. The early results looked artificial, with colors that seemed chosen by algorithm rather than informed by historical accuracy. A 1940s dress would come out in a shade of blue that didn't exist in fabric dyes until the 1970s.
But the technology improved rapidly. By early 2023, I'd assembled a toolkit that combined multiple AI approaches: one neural network for facial reconstruction, another for texture synthesis, a third for intelligent colorization that actually understood historical context. I still do significant manual work — AI is a tool, not a replacement for judgment — but my time per photo dropped to three to four hours, and the quality improved dramatically.
The workflow I've settled on involves five stages: assessment and documentation, damage mapping, AI-assisted reconstruction, manual refinement, and client review with revision. Each stage has specific decision points where I determine whether to proceed, try a different approach, or — and this is crucial — tell the client that restoration isn't possible without unacceptable levels of fabrication.
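To make those stage gates concrete, here is a minimal sketch of the five-stage loop in Python. The names and thresholds are illustrative placeholders, not my actual tooling; the 30% fabrication ceiling echoes the stopping criterion I describe later in this article.

```python
from enum import Enum, auto

class Decision(Enum):
    PROCEED = auto()
    TRY_DIFFERENT_APPROACH = auto()
    DECLINE = auto()  # restoration not possible without unacceptable fabrication

# The five stages, in order.
STAGES = [
    "assessment and documentation",
    "damage mapping",
    "AI-assisted reconstruction",
    "manual refinement",
    "client review and revision",
]

def stage_gate(fabrication_pct: float, quality_ok: bool) -> Decision:
    """Toy decision gate evaluated at the end of each stage.

    fabrication_pct: rough share of the image that would be generated
    rather than recovered (illustrative estimate, not a measured value).
    quality_ok: whether the stage's output passed a manual check.
    """
    if fabrication_pct > 30:
        return Decision.DECLINE
    if not quality_ok:
        return Decision.TRY_DIFFERENT_APPROACH
    return Decision.PROCEED
```

The point of the sketch is the shape of the process, not the numbers: every stage ends with an explicit decision rather than drifting into the next one.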
## The Wedding Photo That Taught Me About Limits
Back to Mrs. Chen's wedding photo. I took it home that evening and spent two hours just studying it before touching any software. The water damage had created a clear demarcation line about a third of the way up. Above that line: sharp, clear, beautifully preserved. Below it: chaos.
I could see the bottom of her dress, or rather, I could see that there had been a dress there. The fabric texture was completely gone, replaced by brown staining and paper degradation. His shoes were visible as dark shapes, but no detail remained. The floor they stood on — hardwood? tile? carpet? — was anyone's guess.
I ran the image through my facial reconstruction AI first. Mrs. Chen's face came through perfectly — it had been in the undamaged zone. Her husband's face was partially affected by water damage across his chin and neck. The AI did something remarkable: it analyzed the undamaged portions of his face, understood the lighting direction, and reconstructed his chin and jawline in a way that looked completely natural.
I showed Mrs. Chen the result three days later. She stared at it for a long moment, then started crying. "That's him," she said. "That's exactly him. I'd forgotten how strong his jaw was."
But then she asked about the bottom of the photo. Could I fix the dress? The floor? Make it look like the whole photo had been preserved?
This is where I had to have a difficult conversation. I explained that I could use AI to generate what a 1962 wedding dress might have looked like, what shoes a young man might have worn, what floor might have been in whatever venue they'd used. But it wouldn't be their dress, their shoes, their floor. It would be the AI's best guess based on training data from thousands of other photos.
"The line between restoration and fabrication isn't always clear, but I've learned to ask myself one question: Am I recovering information that existed in this photograph, or am I creating new information that never existed in it? If it's the latter, I need to be very careful about how I proceed."
Mrs. Chen thought about this. Then she said something that changed how I approach every restoration job: "I don't need it to be perfect. I just need to see his face again. The rest is just paper."
I delivered the photo with his face beautifully restored and the damaged portions left as they were, perhaps slightly cleaned up but not fabricated. She framed it exactly as it was. Sometimes the damage is part of the story.
## Breaking Down 200 Restorations By Damage Type
After completing my 200th restoration last month, I went back through my project files and categorized every job by primary damage type, AI involvement level, and outcome quality. The patterns that emerged were illuminating.
| Damage Type | Number of Projects | AI Success Rate | Average Time (Hours) | Client Satisfaction |
|---|---|---|---|---|
| Water Damage | 67 | 73% | 4.2 | 4.3/5 |
| Sun Fading | 48 | 91% | 2.8 | 4.7/5 |
| Physical Tears | 42 | 88% | 3.5 | 4.6/5 |
| Mold/Mildew | 23 | 65% | 5.1 | 4.1/5 |
| Chemical Damage | 12 | 58% | 6.3 | 3.9/5 |
| Multiple Types | 8 | 50% | 8.7 | 3.8/5 |
The data tells several stories. Sun fading, despite often affecting the entire image, is actually the easiest damage type to address. The information is still there in the photograph — it's just been bleached out by UV exposure. AI tools excel at recovering faded details and reconstructing color information based on what remains.
Water damage is trickier. The success rate drops to 73% because water doesn't just fade a photo — it can completely destroy the emulsion layer where the image lives. When that happens, no amount of AI wizardry can recover what's physically gone. The 27% failure rate represents cases where I had to tell clients that restoration beyond a certain point would require too much fabrication to be honest.
Physical tears, surprisingly, have a high success rate. That's because tears don't usually destroy information — they just separate it. AI is excellent at understanding what should connect across a tear line and blending the reconstruction seamlessly.
The real challenges are mold damage and chemical damage. Mold doesn't just sit on top of a photo — it eats into it, creating irregular patterns of destruction that are hard for AI to predict. Chemical damage (from improper storage, cleaning attempts, or exposure to household chemicals) can alter the photo in ways that confuse AI models trained on natural degradation patterns.
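The table above comes from nothing fancier than tallying per-project records. Here is a minimal sketch of that bookkeeping in Python; the field names and sample records are hypothetical, not my actual project log.

```python
from collections import defaultdict

# Hypothetical per-project records; the fields mirror the table's columns.
projects = [
    {"damage": "Sun Fading", "ai_success": True, "hours": 2.5, "rating": 5},
    {"damage": "Sun Fading", "ai_success": True, "hours": 3.0, "rating": 4},
    {"damage": "Water Damage", "ai_success": False, "hours": 5.0, "rating": 4},
]

def summarize(projects):
    """Group records by damage type and compute the table's summary stats."""
    by_type = defaultdict(list)
    for p in projects:
        by_type[p["damage"]].append(p)

    summary = {}
    for damage, rows in by_type.items():
        n = len(rows)
        summary[damage] = {
            "projects": n,
            "ai_success_rate": round(100 * sum(p["ai_success"] for p in rows) / n),
            "avg_hours": round(sum(p["hours"] for p in rows) / n, 1),
            "avg_rating": round(sum(p["rating"] for p in rows) / n, 1),
        }
    return summary
```

Keeping the raw records around, rather than just the summary, is what made it possible to re-slice the 200 projects by AI involvement and outcome after the fact.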
## Understanding Why Some Restorations Enter the Uncanny Valley
Here's something nobody talks about in the AI restoration community: sometimes the results are terrifying.
I'm not being dramatic. I mean genuinely unsettling in a way that makes you want to close the file and walk away from your computer. It happens most often with facial reconstruction, and I've learned to recognize the warning signs.
Project #147 was a severely damaged portrait from the 1950s. The subject's face had been scratched — not accidentally, but deliberately, with what looked like a key or knife. Someone had wanted to erase this person from the photograph. The client was the subject's daughter, trying to recover an image of her father after her mother's death. The scratches had removed his eyes, nose, and mouth almost completely.
I ran it through my facial reconstruction AI. The result was technically impressive — the AI had generated a complete face with proper lighting, appropriate age characteristics, and period-accurate styling. But something was wrong. The eyes didn't quite focus on the same point. The smile was symmetrical in a way human smiles never are. The skin texture was too perfect, too uniform.
It looked like a face, but it didn't look like a person. It looked like an AI's idea of what a person should look like.
"The uncanny valley in AI restoration happens when the algorithm is too confident. It generates details that are statistically probable but emotionally wrong. A real face has asymmetries, imperfections, and quirks that make it human. An AI face has none of these things unless specifically trained to include them."
I showed the client three versions: the original damaged photo, the AI reconstruction, and a partial restoration where I'd used AI to suggest possible features but kept the reconstruction subtle and incomplete. She chose the partial restoration. "I can see my father in that one," she said. "The other one looks like someone wearing my father's face."
This experience taught me that sometimes less is more. The human brain is incredibly good at filling in missing information from partial cues. If I give clients 70% of a face and let their memory fill in the rest, the result often feels more authentic than if I give them 100% of an AI-generated face.
I've had five restorations that entered uncanny valley territory badly enough that I scrapped them and started over with a more conservative approach. In each case, the problem was the same: I'd let the AI do too much work without enough human oversight.
## Challenging the "AI Can Fix Anything" Myth
There's a dangerous narrative building in the photo restoration world: that AI has made it possible to fix any damaged photo, no matter how severe the damage. This is not just wrong — it's harmful.
I've had potential clients come to me with photos that are 80% destroyed, expecting miracles because they saw a viral social media post about AI restoration. They've seen the before-and-after comparisons that make it look like AI can conjure detail from nothing. What they don't see is the fine print: many of those dramatic restorations involve significant fabrication.
What the viral posts never include is an honest accounting of what AI can and cannot do. AI can recover information that's present but degraded. It can intelligently interpolate missing details based on surrounding context. It can remove damage that obscures but doesn't destroy the underlying image. What it cannot do is read minds or travel back in time. It cannot know what color dress someone wore if that information is completely gone from the photograph.
The most honest thing I can tell a client is sometimes "I can't restore this without making things up." That's not a failure of technology — it's a recognition of reality.
Alongside the 200 projects I've completed, I've turned down 23. In each case, the damage was so severe that any restoration would have been more fabrication than recovery. One client had a photo that had been in a house fire — literally burned around the edges, with the center section heat-damaged to the point where the image was just a brown blur. She wanted me to restore her grandfather's face.
I couldn't do it honestly. I could have used AI to generate a face that might have looked like an elderly man from the 1970s, but it wouldn't have been her grandfather. It would have been a statistical composite of what AI thinks elderly men from the 1970s looked like.
"The hardest part of this work isn't the technical challenge of restoration. It's having the integrity to say no when restoration crosses the line into fabrication. Every time I'm tempted to push the AI a little further, I think about Mrs. Chen saying 'That's exactly him' and I remember that my job is to recover memories, not create new ones."
The AI-can-fix-anything myth also creates unrealistic expectations about cost and time. Yes, AI has made restoration faster and more accessible. But quality restoration still requires human judgment, historical knowledge, and artistic skill. The AI is a powerful tool, but it's not a magic wand.
## Seven Lessons From 200 Restorations
After eighteen months and 200 projects, I've developed a set of principles that guide every restoration I take on. These aren't just technical guidelines — they're ethical frameworks for working with people's most precious memories.
1. Document everything before you start. I take high-resolution scans of every photo from multiple angles, with and without flash, under different lighting conditions. I've had three cases where the physical photo deteriorated further during the restoration process (one got wet again, one was damaged by a pet, one was lost in a move). Having comprehensive documentation meant I could still complete the restoration. More importantly, it gives me a baseline to return to if an AI approach goes wrong. I can always start over from the original scan rather than trying to undo AI modifications that didn't work.

2. Show clients the damage map before proposing solutions. I create a visual overlay that color-codes different types and severities of damage: green for minor issues that will restore easily, yellow for moderate damage that will require careful work, red for severe damage where restoration will be limited, and black for areas where information is completely gone. This manages expectations and helps clients understand why some photos cost more or take longer than others. It also opens a conversation about priorities — if budget is limited, which parts of the photo matter most?

3. Never use AI on faces without manual verification. This is non-negotiable. Facial reconstruction AI can do remarkable things, but it can also create subtle wrongness that clients will notice even if they can't articulate what's wrong. I always do a manual pass after AI facial work, checking proportions, symmetry, and expression against any other photos of the subject I can find. I've caught AI errors that would have been devastating — an AI that aged someone's face incorrectly, one that changed eye color, one that made someone's smile look forced when family members said they always had a natural, easy smile.

4. Maintain a "fabrication log" for every project. I keep detailed notes on what was recovered versus what was generated. If I use AI to fill in a missing section of background, I note it. If I have to guess at a color because the original is too faded, I note it. If I reconstruct a torn edge, I note whether I had both pieces or had to interpolate. This log serves two purposes: it keeps me honest about how much fabrication I'm doing, and it gives clients transparency about what they're getting. Some clients want to know every detail; others just want the result. But the information is always available.

5. Offer multiple restoration levels at different price points. Not everyone needs or can afford a full restoration. I offer three tiers: basic (damage removal and color correction, minimal AI use, $75-100), standard (moderate AI reconstruction, careful facial work, $150-200), and premium (extensive reconstruction, multiple revision rounds, archival-quality output, $300-400). This makes restoration accessible to more people while still allowing me to do deep, careful work on projects that need it. About 40% of my clients choose basic, 45% choose standard, and 15% choose premium.

6. Build in a cooling-off period before final delivery. After I complete a restoration, I sit on it for 24-48 hours before showing the client. This gives me time to look at it with fresh eyes and catch issues I might have missed in the intensity of the work. I've caught problems in this cooling-off period that would have required awkward conversations if I'd delivered immediately. An expression that looked right at 2 AM after six hours of work sometimes looks wrong at 10 AM after a good night's sleep.

7. Create a decision tree for when to stop. I have specific criteria for when to stop working on a restoration: when I'm fabricating more than 30% of the image, when I've spent more than twice my estimated time without proportional improvement, when I catch myself making the same adjustment repeatedly without being satisfied, or when the result starts looking artificial.
These stopping points prevent me from falling into the trap of endless tweaking and help me recognize when a photo has reached its restoration limit.

## Discovering the Emotional Weight of Memory Work
What surprised me most about this work wasn't the technical challenges — it was the emotional intensity. I'm not just fixing damaged paper. I'm handling people's grief, their nostalgia, their family histories, and sometimes their regrets.
Project #89 was a photo of a young man in a military uniform from the Vietnam era. The client was his sister. He'd died in combat in 1971, and this was the only photo the family had of him in uniform. Water damage had affected his face and the upper portion of his uniform.
The restoration went well technically. The AI did beautiful work on his face, and I was able to recover most of the uniform details. But when I delivered it, his sister broke down crying — not tears of joy, but tears of grief that had been waiting fifty years to be released.
"I'd forgotten what he looked like," she said. "I've been carrying this damaged photo in my wallet for decades, and I'd forgotten his real face. I only remembered the damaged version."
I wasn't prepared for that. I'd given her back her brother's face, but I'd also confronted her with fifty years of loss. The restored photo was beautiful, but it was also painful in a way the damaged version hadn't been. The damage had created distance; the restoration removed it.
I've learned to warn clients about this possibility. Sometimes restoration brings up emotions that have been safely buried. Sometimes seeing a loved one's face clearly again after years of looking at a damaged photo is overwhelming. I'm not a therapist, but I've become aware that I'm doing work that has therapeutic dimensions.
On the flip side, I've had restorations that created moments of pure joy. Project #156 was a photo of a woman's grandmother as a young girl in the 1920s. The photo had been torn in half and then taped back together, with significant damage along the tear line. The restoration revealed details that had been hidden by the tape and damage: the grandmother was holding a doll, wearing a dress with intricate embroidery, standing in front of a house with a distinctive porch.
The client called me crying (I get a lot of crying calls in this business). "I never knew about the doll," she said. "My grandmother died when I was young, and I never got to ask her about her childhood. But now I can see she had a doll, and I can see the dress her mother made for her, and I can see the house she grew up in. You gave me pieces of her story I never knew existed."
That's when this work feels less like a side business and more like a calling.
## Navigating the Ethics of Historical Accuracy
One of the most complex challenges in AI-assisted restoration is maintaining historical accuracy, especially with colorization. Early AI colorization tools were notoriously bad at this — they'd colorize a 1940s photo with colors and tones from the 2000s, creating anachronistic results that looked wrong even if you couldn't immediately identify why.
Modern tools are better, but they still require human oversight. I've developed a reference library of historical color information: what colors were available in fabric dyes in different eras, what colors were popular for cars, what colors were used in interior design, what colors appeared in military uniforms. This isn't just pedantic attention to detail — it's about respecting the historical reality of the people in these photos.
Project #178 involved a photo from the 1930s Dust Bowl era. The client's great-grandfather was standing in front of a farmhouse, and the original black-and-white photo showed him wearing what appeared to be a work shirt. The AI wanted to colorize it blue — a nice, saturated blue that would look good in a modern photo.
But I knew from my research that work shirts in that era and region were typically made from denim or chambray that had been washed dozens of times, creating a faded, almost gray-blue color. The saturated blue the AI suggested would have been completely wrong for the time and place. I manually adjusted the color to reflect the historical reality.
The client noticed. "That's exactly the color I remember from old shirts my grandfather had," she said. "That washed-out blue that's almost gray. How did you know?"
Historical accuracy matters because these photos are documents of real lives lived in real times. Getting the details right honors the people in the photos and preserves the historical record.
## Recognizing When AI Creates Rather Than Restores
The most ethically fraught aspect of AI restoration is the line between restoration and creation. This came into sharp focus with Project #134, a severely damaged photo from the 1890s.
The photo showed a family group — parents and four children — but water damage had completely destroyed the faces of two of the children. Not partially damaged, not faded — completely gone. The client wanted me to restore the entire family.
I could have used AI to generate faces for those two children. The AI would have analyzed the other faces in the photo, understood the family resemblance, considered the era and likely age of the children, and generated plausible faces. They would have looked like they belonged in that family, in that era, in that photo.
But they wouldn't have been those children. They would have been the AI's invention, created from statistical patterns in its training data. I would have been creating new people and inserting them into a historical document.
I explained this to the client and offered an alternative: I could restore the two children whose faces were intact, clean up the overall photo, and leave the damaged areas as artistic vignettes — faded into white or sepia tones that acknowledged the damage without trying to fabricate what was lost.
The client was disappointed but understood. "I guess I wanted you to give me something that doesn't exist," she said. "I wanted to see all of them together, the way they were. But you're right — making up faces would be worse than leaving them missing."
This is the conversation I wish more people in the AI restoration space would have. There's a difference between restoration (recovering what was there) and generation (creating what might have been there). Both have their place, but they shouldn't be confused with each other.
"When I use AI to generate rather than restore, I tell clients explicitly. I'll say: 'This is what the AI thinks might have been there based on similar photos from that era, but it's not a recovery of the original image.' Some clients want that. Others don't. But everyone deserves to know what they're getting."
I've started offering a "speculative restoration" service for cases where clients want to see what AI generation can do. But I deliver it separately from the actual restoration, clearly labeled as speculative, and I charge less for it because I want clients to understand it's a different category of work.
## When to Say No to a Restoration Job
After 200 projects, I've developed clear criteria for when to decline a restoration job. This final section is the most important thing I can share with anyone considering photo restoration work — knowing when to say no is as important as knowing how to say yes.
Decline when the damage exceeds 60% of the image area. At that point, you're not restoring — you're creating. The AI will be generating more content than it's recovering. Unless the client explicitly wants speculative reconstruction and understands what that means, the honest answer is that restoration isn't possible.

Decline when the client's expectations are impossible. I've had clients bring me photos that are literally just brown stains on paper and ask me to restore them. No amount of AI can recover information that's been chemically destroyed. When a client insists that they've "seen AI do amazing things" and refuses to accept that their photo is beyond restoration, I decline the job. I'm not going to fabricate a photo and pretend it's restoration just to make a sale.

Decline when you don't have enough context. Facial reconstruction requires reference photos of the subject, or at least photos of family members to understand facial structure and features. Without that context, you're just generating a generic face. I need at least one other photo of the subject, or multiple photos of close relatives, to do facial reconstruction ethically.

Decline when the client wants you to change the photo rather than restore it. I've had requests to "restore" a photo by making someone look younger, thinner, or happier. That's not restoration — that's manipulation. I don't do it. The photo is a historical document of a moment in time. My job is to recover that moment, not to revise it.

Decline when you feel pressured to rush. Quality restoration takes time. When a client needs a photo restored in 24 hours for a funeral or memorial service, the pressure to deliver can lead to shortcuts and poor decisions. I've learned to be honest about my timeline and to decline rush jobs that would compromise quality.

Decline when the emotional stakes feel too high for your skill level. Some photos carry so much emotional weight that the pressure is overwhelming. If a client tells me this is the only photo of their deceased child, or the only image of a parent they never met, or a photo that represents their entire family history, I need to be honest about whether I can handle that responsibility. Early in my restoration work, I took on jobs that were beyond my emotional capacity to handle well. Now I recognize my limits.

Decline when you can't verify the client's right to the photo. I've had people bring me photos that they "found" or that "might be" of their relatives. Without clear provenance and ownership, I won't work on a photo. I don't want to be part of someone appropriating another family's history.

The hardest part of saying no is that you're often saying it to someone who's desperate for help. They've come to you because this photo matters deeply to them, and you're telling them you can't help. But saying no when you should is more ethical than saying yes and delivering something that's fabricated, rushed, or dishonest.
I keep a list of other restoration professionals with different specialties and skill levels. When I decline a job, I try to refer the client to someone who might be better suited to their needs. Sometimes the right answer isn't "I can't help you" but "I can't help you, but here's someone who might be able to."
The work of photo restoration sits at the intersection of technology, art, history, and emotion. AI has made remarkable things possible, but it hasn't eliminated the need for human judgment, ethical consideration, and honest communication. After 200 restorations, I'm more convinced than ever that the most important tool in my kit isn't the AI — it's the willingness to have difficult conversations about what's possible, what's honest, and what's right.
Mrs. Chen's restored wedding photo hangs in her living room now. Her husband's face is clear and beautiful, exactly as she remembered it. The water-damaged bottom portion remains damaged, a reminder of time and loss and the limits of what we can recover. She tells me it's perfect exactly as it is.
That's the standard I hold myself to: not perfection in the technical sense, but honesty in the human sense. Sometimes the most stunning result is the one that acknowledges what can't be restored. And sometimes the most terrifying thing isn't an AI-generated face that looks wrong — it's the temptation to generate something that looks right but isn't true.