Earlier in the year, Gary McMaster, a colleague in Australia, contacted me about the appropriate use of artificial intelligence (AI) in an image submitted to our photo library. He was considering whether the photograph of a mother and her baby could be ‘improved’ before being added to our collection.
The original image was fine, but the latest version of Photoshop allowed him to use generative AI. Gary was able to construct the mother’s missing hair and the background scenery, all from his desk, thousands of miles away from where the photograph was originally taken.
Gary’s question to me was this: is it appropriate to include images that have used generative AI in our photo library?
The use of AI is not confined to working with photographs. Taking a little time, I asked ChatGPT to “Provide a template fundraising letter for the British Sign Language Bible translation team, needing to raise £10,000 by November 2023 in order to complete the translation of the Old Testament by December 2024”.
Within 30 seconds, I had a 500-word fundraising letter that would work quite nicely for most audiences. I doubt that many people would be able to pick up on the fact that it was completely generated by AI.
Encouraged by this, I tested ChatGPT in a few other areas. I asked for an encouraging article about Bible translation in Burkina Faso, for some social media posts that could be used in fundraising, and I even asked for steps necessary to plan an anniversary celebration for an organisation.
Within a few moments, I had answers to each question. The style was a bit different to what I am familiar with, but it wouldn’t have taken a lot of editing to tailor each response to the audience I had in mind. ChatGPT produced better-than-average first drafts.
Gary continued to provide example images, showing what was possible with Photoshop. He shared images of completely fictitious towns, scenery and even people, from AI generators.
While AI generated some quick solutions, there are a few issues that aren’t so easy to resolve.
In response to my request for an article about Bible translation in Burkina Faso, ChatGPT wrote the sentence, “This beautiful country is home to over 60 different ethnic groups, each with its unique language and culture.” What it didn’t do is give me a reference source for the 60 different ethnic groups. From my general knowledge (I have visited Burkina Faso twice) the number sounds correct, but I’m not happy to start publishing details like this without being able to verify that they are correct.
My concern is this: if I use AI frequently, how long will it be before I stop picking up on these details and simply accept the answers generated? I suspect it would not be long before it becomes like blindly following a sat-nav and ending up stuck in a narrow alley.
Also, what happens to the data I submit to the AI? If I upload the responses to a survey for it to analyse, to whom am I giving that data? Where will it be stored and how will it be used? Or, if I want a sensitive image adjusted so that it becomes anonymous, what will happen to the original image that was supplied?
I have spoken to several leaders who are working on AI policies for their organisations. What they are attempting to produce seems sensible and is being created with a desire to ensure that AI can be used as a help to organisations. Yet, what seems to be harder to pin down is how our values are expressed in the use of AI.
I think, as with most of the things we do, we can be practical and sensible in the limits we try to apply to our use of certain tools.
For example, the internet is a valuable tool, but it can also be a minefield of endless distractions and unhealthy content. In the context of AI, I think the key question is what our use says about us: in effect, how our values are reflected in the way that we use AI.
In the back of my mind, I have those words of Paul from 1 Corinthians 10 echoing: ‘“I have the right to do anything,” you say—but not everything is beneficial. “I have the right to do anything”—but not everything is constructive.’ (1 Corinthians 10:23 NIV)
Where do we draw the line with AI?
When Gary asked about using AI to generate parts of a missing photograph, I had a conversation with some of my team about whether we thought this was a good idea. The conclusion we reached is that we want people to know that our photographs are a true representation of a particular time and place. The woman in the photograph may indeed have a forehead that has been cropped out of the image, but we decided that it’s not for a computer to determine what that forehead looks like.
For our team and the photos we share, using AI to improve the quality of an image is OK, but we decided we will not generate elements that haven’t been seen by the lens.
Of course, for some organisations this will still leave questions. What about those who work in parts of the world where it’s not possible to show staff working with a community or to reveal the place in which they work? Could generative AI provide an answer? Could it create representative images of towns or people at work in a way that can illustrate what the organisation does, as long as these are clearly marked as such? Or would that line between reality and fabrication become more and more blurred?
Our team’s decision may not apply to everyone, but we all need to think about these issues, and soon. What do our decisions say about us and our organisations? And how do we best communicate them to those who partner with us?