Artificial Intelligence
A message from PC(USA) General Counsel Mike Kirk
Sophisticated hackers are using Artificial Intelligence (AI) to deceive employees with deep fakes and email phishing scams. While AI is a useful tool to support work, it also gives bad actors a way to advance their schemes: gaining access to personally identifiable information, such as Social Security numbers and bank account information, and persuading employees to transfer money to accounts the hackers control.
Hackers use phishing and social engineering to fabricate stories and situations that make employees uncomfortable and pressure them to act quickly without checking whether the email or deep fake video is legitimate. Below are some examples used in these email schemes:
- There is a problem with your account; please click on this link to correct it.
- Attached is an invoice that must be paid immediately or we will cancel your account, and failure to pay will impact your credit rating.
- We are a government agency, and you have failed to act. If you do not respond, you will face legal consequences.
- Hi. It’s your sister. We are on vacation and we were robbed of all of our money and credit cards. Please immediately send us money at the link below.
Sometimes employees are asked to open an attachment; often, they are asked to click on a link. When they do, malware or ransomware can enter your office systems and destroy data, or even entire systems. Train your employees not to fall for phony emails or deep fake videos displayed through a meeting platform such as Zoom.
How to protect yourself and your organization:
- Train your employees to be cautious of emails that ask for personal information about themselves, their co-workers, or your donors and members, or that ask them to send money.
- Train your employees to hover their cursor over the sender's email address. If it does not match the address of the person the sender claims to be, or appears suspicious in any way, they should not respond to the email or click on any links or attachments. DELETE IT!
- They should contact the person who allegedly sent the email asking for personally identifiable information or money and ask, "Did you send me this email or video?"
- Implement AI guidelines to encourage your employees to use AI responsibly. The most important guideline is not to upload personally identifiable information about your employees, members, donors, or your organization to a public-facing AI platform, such as ChatGPT. That information then becomes public, and anyone can view and use it. Likewise, employees should not record meetings that include confidential or personally identifiable information and upload the recording to a public AI platform to transcribe meeting notes. Again, it becomes public.
- The Presbyterian Church (U.S.A.), A Corporation website has a set of guidelines implemented at the national offices, which you are welcome to review and borrow from to implement your own guidelines at your congregation or mid council. Employees have likely already started exploring AI platforms like ChatGPT to see how they work and how they can be used on the job. We all need to stay one step ahead in the safe use of AI to benefit our work and ministries.
The A Corp website has resources you might want to consult on other issues, as well.