An overview of advertising codes and standards, AI case studies and the Ethical Principles for AI Marketing.
This playbook aims to explore Artificial Intelligence marketing ethics for both current and future uses of marketing AI.
Before you read this, we recommend reading our playbook AI Marketing Basics where we cover the definition of AI, the different types of AI, use cases in marketing and current AI platforms.
Marketing ethics, codes and standards
Marketers and advertisers are bound by local standards and codes that determine advertising practices. These standards can include codes covering ethics, advertising to children, environmental claims, alcohol, gaming and more. In addition to codes and standards, marketing is also subject to other laws, including privacy and copyright law.
These are some of the existing industry bodies for marketing ethics, codes and advertising standards in the USA, Canada, UK and Australia:
- Federal Trade Commission (USA)
- Australian Association of National Advertisers (Australia)
- Ad Standards (Australia)
- Ad Standards (Canada)
- Advertising Standards Authority (UK)
Currently these codes and standards apply even when AI marketing is used; however, there are no specific codes or standards governing the operation of artificial intelligence in marketing or the new scenarios that arise when using AI.
Business, marketing and martech professionals all have a responsibility to develop new codes and standards for AI marketing.
AI Marketing Cases
There are already known ethical issues arising from AI marketing, and we are likely to see more as the technology advances and new applications emerge.
Privacy and consent
AI models work in three steps: 1. taking input data, 2. processing that data, and 3. creating an output. It's often the processing within AI models that causes privacy issues. These include re-identifying users from de-identified data, de-aggregating user data, using personally identifiable information (PII) for marketing communications, replicating user data across multiple models and applying consent incorrectly.
Here are a few examples of what can go wrong with AI marketing within privacy:
- An AI model is used to create personalised offers, but the AI model and the processed data are saved in another database outside the organisation.
- An AI chatbot provides personalised responses to customers who have not given consent for any data personalisation.
- An AI model designed to match anonymous users with another database also identifies the users' names.
- Generative AI is set up to create personalised imagery and starts using personal information in the generated image.
Both AI modellers and privacy professionals are still developing privacy processes to ensure the development, processing and output of AI achieve the business purpose without breaching privacy laws or negatively impacting customers.
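One way to reduce the consent problems described above is to gate customer records before they ever reach a personalisation model. A minimal sketch in Python; the record structure and the `personalisation_consent` field are hypothetical, not taken from any specific platform:

```python
# Minimal consent gate: only records with explicit personalisation
# consent are passed on to a personalisation model. Field names
# here are illustrative only.

def filter_consented(records):
    """Keep only customers who explicitly opted in to personalisation."""
    return [r for r in records if r.get("personalisation_consent") is True]

customers = [
    {"id": 1, "personalisation_consent": True},
    {"id": 2, "personalisation_consent": False},
    {"id": 3},  # consent never recorded -> treated as no consent
]

eligible = filter_consented(customers)
print([r["id"] for r in eligible])  # [1]
```

Note the default: a missing consent flag is treated the same as a refusal, so the model only ever sees opted-in customers.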
Copyright and source data
Generative AI uses source data and materials to generate copy, imagery, code and other types of content. This raises complexity when the source data may include copyright materials such as photography, art, music, writing and other works.
However, AI is not specifically addressed in international copyright law or in Australia's Copyright Act 1968 (Cth), and there are few legal precedents. Copyright law also recognises only humans as authors, not artificial intelligence systems.
Currently AI does not have any rights under copyright law and there is no legal obligation to indicate AI was used to generate the work. AI products in the future may need to disclose whether copyright artwork is used in their source data, or even be restricted to only using royalty free source data.
Marketers who use generative AI software may not be aware that a generated image infringes copyright. As with any other design product (e.g. Photoshop), the responsibility for the final use of the image will likely fall on the end user (the marketer).
Unfair machine learning predictions and discriminatory pattern recognition
One of AI's strengths is pattern recognition. While these patterns can be extremely useful, patterns drawn from certain types of data can be inaccurate and even discriminatory.
For example, an AI model designed to predict who will default on their mortgage scores males lower (less likely to default) than females. Even if the source data fed into the model shows this pattern, gender can't be used as an indicator because doing so is a form of discrimination.
Discriminatory profiling can be created accidentally from user attributes such as age, city of birth and ethnic background. More broadly, historical data is not always a good predictor of the future, which is a general problem with using AI.
Unsupervised personalised communications
Unlike traditional marketing and advertising, AI marketing can scale creative development and, as a result, be highly personalised to the individual. Whether it's a unique chatbot response, personalised copy and imagery, or a customised offer, AI marketing can be tailored to individuals rather than delivered as mass communications.
Automated generative AI allows marketers to develop creative work at scale but with the risk of less control over what the user receives.
Model design issues
Model design issues include failures and design flaws that occur from designing or using the model.
Some examples of design issues:
- The model provides no information about its purpose, its creator or the organisation that owns it.
- The model cannot provide information on its source data, processing or final output when required.
- The model cannot be controlled or turned off, or is able to replicate itself.
- The model is designed for malicious purposes such as scamming, phishing, or fraud.
Ensuring that we have consistent codes and standards for model design will help reduce the number of AI models that are used for malicious or unintended purposes.
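The design requirements above (purpose, creator, source data, controllability) can be captured in a simple machine-readable record that ships with the model. A sketch of such a record; the structure and all field names are hypothetical:

```python
# A minimal "model card" style record answering the design questions
# above: who owns the model, what it is for, what data feeds it,
# and whether it can be switched off. All values are illustrative.
model_card = {
    "name": "offer-personaliser-v2",
    "purpose": "Rank existing product offers for opted-in customers",
    "owner": "Example Pty Ltd, Marketing Data Team",
    "source_data": ["CRM purchase history", "website clickstream"],
    "can_be_disabled": True,
    "kill_switch": "feature flag PERSONALISER_ENABLED",
}

def describe(card):
    """Render the card as human-readable lines for disclosure."""
    return [f"{key}: {value}" for key, value in card.items()]

for line in describe(model_card):
    print(line)
```

Keeping this record alongside the model means the answers exist before a regulator, auditor or customer asks for them.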
Ethical Principles for AI Marketing
We've created the Ethical Principles for AI Marketing to help develop codes and standards for AI marketing, as well as to assist organisations using AI.
1. Artificial Intelligence values statement
Organisations using AI should write and publicly display their AI policy and values statement. This statement includes what sort of AI practices they use in their operations and marketing.
This values statement is important to communicate to stakeholders and businesses, but also to ensure any AI designers are aware of the values that they work under. Creating effective AI should only be achieved within a strong values framework.
2. Right to privacy
As part of the values statement, organisations should update their privacy policies to include details around AI use and how it uses customer data.
It's recommended that organisations provide customers with flexible choices and customisation. Customers should always be informed when they are using AI products, how their data will be used by AI, and whether they are conversing with AI bots.
3. Right to transparency
As the cases of privacy breaches, data misuse and discriminatory profiling show, how AI processes data matters, and so does providing visibility of that decisioning.
Providing detailed information on how AI models are used demonstrates that the organisation is applying AI fairly and to enhance the user's experience.
Organisations should categorise these models into disclosable and non-disclosable:
A disclosable model is an AI model that the organisation should disclose, providing information on how it's used and the rules it applies to process data. Such a model may operate unsupervised and make decisions on behalf of the customer or organisation. Examples include automated home loan application outcomes, automated online medical diagnoses, automated insurance claims and other sensitive decisions.
A non-disclosable model is an AI model that does not need to be disclosed by the organisation. These models do not make critical decisions and exist mainly for operational efficiency. Examples include running data analysis, scheduling communication triggers, and general task automation.
4. Right to equality and fairness
To ensure AI models do not create unfairness, there should be a prohibition on secret profiling, unitary scoring, racial profiling, gender profiling, sexual orientation profiling and other types of discriminatory profiling.
Prediction models should use exclusionary criteria that prevent the model from creating logic using this type of data.
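The exclusionary criteria above can be enforced at feature-selection time, before the model ever sees a protected attribute. A minimal sketch, assuming the organisation maintains its own list of protected column names (the names below are illustrative):

```python
# Strip protected attributes from feature rows so a prediction
# model cannot build logic on them. The set of protected columns
# is an organisational choice; these names are illustrative.
PROTECTED = {"gender", "ethnicity", "age", "city_of_birth"}

def exclude_protected(row):
    """Return a copy of a feature row with protected fields removed."""
    return {k: v for k, v in row.items() if k not in PROTECTED}

applicant = {
    "income": 85000,
    "loan_amount": 400000,
    "gender": "F",
    "age": 34,
}

features = exclude_protected(applicant)
print(sorted(features))  # ['income', 'loan_amount']
```

Note that dropping a column is only the first step: remaining fields can act as proxies for protected attributes (a postcode can correlate with ethnicity, for example), so the fairness reviews described in this principle are still needed.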
5. Generative AI reviews and feedback
Generative AI produces content such as imagery, copy, music, or code. An organisation should create quality assurance processes to ensure generated content follows copyright law as well as advertising codes and standards.
The source data used for generative AI should be regularly reviewed to remove any undesirable source data. Depending on the model and its output this could be adult imagery, outdated product information, broken code and so on.
AI models can also have self-optimising capabilities, so providing quality feedback to the model will improve its overall performance and compliance. Allowing customers to provide feedback to AI models is another way of gathering quality feedback to improve service.
Our history of advertising codes and standards tells the story of how advertising has crossed an ethical line and then a new code, standard or rule is introduced.
Whether these rules have been triggered by advertising to children, encouraging alcoholism, or making fake environmental claims - we often react to bad marketing behaviour and then implement standards that allow for fair communication practices.
There are currently no specific codes and standards when it comes to the operations of artificial intelligence marketing and the new scenarios that occur when using AI.
These AI marketing standards should be implemented before an ethical standard is broken. Pro-activity with marketing standards also allows marketers to help form the codes and standards rather than simply be subject to them.
If your organisation is experimenting with AI marketing, be sure to use the Ethical Principles for AI Marketing to help inform your organisation's stakeholders.
Additional resources on AI ethics:
- Marketing AI institute
- CSIRO AI Ethics Framework
- Nuffield Foundation’s roadmap for AI research
- Institute of Electrical and Electronics Engineers
- AI Now 2017 Report
- The One Hundred Year Study on Artificial Intelligence
- The Asilomar AI Principles
- Universal Guidelines for Artificial Intelligence
- The Partnership on AI
- The Ethics of Artificial Intelligence
- The AI Initiative