Human augmentation explores our collaboration with technology to enhance physical or cognitive abilities. However, the ambivalence of AI for humans becomes evident when we consider its differing perceptions in sciences and humanities. In scientific circles, AI is viewed as groundbreaking and revolutionary, largely because of the opportunities and advances it provides in research. This positive, transhumanist perspective emphasizes that humans should use technology to develop enhanced abilities beyond their natural limits. Consequently, the goal in technology development is to create "superpowers" that can significantly expand our individual potential.
In the humanities, there is concern about the impact of AI on traditional professional fields. The humanities’ engagement with AI reflects on the relationship between humans and technology and asks how this collaboration can be improved. Professional writers and editors fear that automation threatens their role in content generation and social media communication. Younger scholars in language studies worry about their professional prospects in light of these developments.
The disparity between these perspectives calls for a nuanced examination of AI's impacts. It is essential to address concerns in various fields while simultaneously recognizing and promoting the positive potential of AI.
The risks of using AI
One crucial risk is the apparent lack of accountability as a consequence of AI deployment. Where no accountability can be established, the impacts on societies worldwide could be significant. Questions about transparency, the boundary between truth and falsification, ethical responsibility, and accountability are central here. How can we ensure that information is accurately conveyed and that users learn to discern right from wrong, especially when they lack experience or context?
Governments and regulatory bodies have developed different approaches: The European AI Act emphasizes a risk-prevention approach, requiring developers and companies to disclose information about the risks of AI products. The UK's AI Action Plan stresses clear communication of the design of AI technologies to promote innovation. The US Executive Order focuses on data privacy, civil rights, consumer protection, innovation, and taking responsibility to ensure the safety and trustworthiness of AI technologies.
Ensuring safety and trustworthiness raises the question of how to promote an understanding of both what the technology is capable of and how it works. Trust requires communication, and technical communication plays a crucial role in the development and communication of AI. If technical communication does not actively intervene in this process, numerous opportunities to make valuable contributions to AI development will be missed. Therefore, technical communication urgently needs to increase its involvement in the machine learning development cycle.
How technical communication can help
Technical communication provides clear, consistent and factual information. Technical communicators translate complex ideas into simple language. Customizing content for specific audiences ensures that the content is understandable for different users. The documentation development lifecycle plays a central role here, starting with planning and analyzing the target audience, followed by developing content using various tools and technologies. This is where the diverse contributions of technical communication in the development and use of AI and machine learning come in.
Technical communication for AI documentation
An important area where technical communication can contribute is the documentation of AI, particularly in addressing the opacity of “black box” models. Making these complex systems accessible and transparent is important both for users and for regulatory authorities. For example, bias in AI-generated content is a major concern. The discussion of how to avoid bias began in 2018 with the idea of documenting the technology development process in detail – essentially, creating developer documentation for machine learning and AI technologies. All relevant aspects should be captured in model cards: the training data and its composition, excluded elements, and the intended user group. The aim is to comprehensively document the entire development process of the machine learning model.
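The elements a model card should capture can be sketched as a simple structured record. The following is an illustrative sketch only; the field names and values are assumptions, not a fixed model-card standard:

```python
from dataclasses import dataclass, field

# A minimal, illustrative model card as a structured record.
# The field names are assumptions, not a fixed standard.
@dataclass
class ModelCard:
    model_name: str
    intended_users: list
    training_data: str    # source and composition of the training data
    excluded_data: str    # elements deliberately left out of training
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="sentiment-classifier-v1",
    intended_users=["customer-support analysts"],
    training_data="10,000 English product reviews (2015-2020)",
    excluded_data="reviews shorter than five words",
    known_limitations=["underrepresents non-English dialects"],
)
```

In practice, such a record would accompany the model through its entire development cycle and be updated as the training data or intended audience changes.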
However, the reality of documentation reveals a discrepancy. Despite the rapid advancement of technology, there is a scarcity of scientific literature on AI documentation. Research literature broadly falls into three main areas: user requirements, standard development, and evaluation. Technical communication has long been engaged with these topics. However, AI developers and researchers are often unaware that this technical communication research already exists. Therefore, it is now time for technical communicators to actively engage in these discussions – not only ones regarding documentation as a byproduct, but also those concerning the development of AI technologies.
Technical communication for prompt optimization
The second area where technical communication can actively contribute is rhetorical prompt engineering: determining how to instruct a specific generative AI tool with a prompt to achieve the best results. Although standards for prompt formulation have been researched for some time, there is no established formula that guarantees effective output – partly because those addressing these issues have, to date, not come from the field of technical communication.
The rhetorical situation, a concept developed by the rhetorician Lloyd Bitzer in the 1960s, holds that the more specifically and accurately one understands a communication situation, the better one can respond to it. A comprehensive understanding of the current situation or problem is therefore crucial to finding an appropriate response. This insight can be directly applied to create a formula for prompt engineering.
In an AI request, the user submits the input text, the prompt, followed by various steps leading to the output text. Due to the way machines learn, contextual embedding is crucial. Unlike human learning, where stories about situations, events, and people play a role, the machine learns by looking at word sequences or sentences that are close to each other. During the initial training of a Large Language Model (LLM) such as ChatGPT, the machine classifies words by collecting and categorizing publicly available internet content. When a request is made, the machine tries to determine which group the content falls into and then extracts relevant words and sentences. The problem with such classification systems is that there is no coherent story associated with the content. This is where human augmentation comes in to effectively tell the machine, "Let me work with you so that you can give me the most accurate output."
When generating prompts, it is important to consider how the machine understands them. The more complete the context of a prompt, the better. Therefore, it should include rhetorical components such as: purpose, need, topic, author, and audience. Here, technical writers can leverage their knowledge of rhetorical situations to tailor information delivery for prompt engineering and develop prompt formulas for various conditions and cases.
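One way to operationalize these rhetorical components is a simple prompt template that requires all five to be filled in. The function and template wording below are a hypothetical sketch, not an established prompt formula:

```python
# Build a prompt from the five rhetorical components: purpose, need,
# topic, author, and audience. The wording is an illustrative sketch.
def build_prompt(purpose: str, need: str, topic: str,
                 author: str, audience: str) -> str:
    return (
        f"You are writing as {author} for {audience}. "
        f"Purpose: {purpose}. "
        f"Need: {need}. "
        f"Topic: {topic}."
    )

prompt = build_prompt(
    purpose="explain a software update",
    need="users must know what changed before installing",
    topic="release notes for version 2.1",
    author="a technical writer",
    audience="non-expert end users",
)
```

Because each component is a required parameter, the template makes missing context visible before the prompt ever reaches the model.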
Technical communication for content development
The programmatic approach in teaching technical communication deals with curricular topics such as grammar, punctuation, and word selection for clarity and conciseness – areas that AI can handle well. In contrast, the rhetorical approach considers not only explicit words but also implicit contexts. For example, the term "ethnicity" should be preferred over "race," and in a software context the word "kill," although commonly used, should be avoided because it evokes associations with warfare.
There is a need for more control over the process of contextual embedding of words. Technical communication can contribute by helping to clean up the data used in supervised machine learning. Although developers can recognize problematic word pairings, teams often lack professional experience in content design and the model development process. Ethical editing and framing can be applied during data preprocessing, fine-tuning, and content evaluation after the model's release. In this regard, technical communication plays a key role in ensuring the application of ethical principles and the qualitative improvement of the model.
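A minimal sketch of such terminology cleanup during data preprocessing might look like the following. The flagged-term mapping and function name are illustrative; real guidance would come from a team's style guide:

```python
import re

# Illustrative flagged-term mapping. Blind substitution can misfire
# (e.g. "race condition"), so a real pipeline would pair this with
# context-aware human review by content professionals.
PREFERRED_TERMS = {
    r"\brace\b": "ethnicity",
    r"\bkill\b": "stop",  # e.g. "kill the process" -> "stop the process"
}

def apply_terminology(text: str) -> str:
    """Replace flagged terms with preferred alternatives before training."""
    for pattern, replacement in PREFERRED_TERMS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text
```

The same mapping can be reused at the evaluation stage to flag problematic wording in model output after release.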
Further areas of engagement
Technical communication can also leverage implicit feedback, i.e., users' non-visible interactions, to improve products. In the software and documentation industry, technical communicators use infrastructures like GitHub to gain insights into the backend of documentation platforms and understand user issues. Comparable processes for understanding how users work with generated content, and to what extent it is successful, are still lacking for AI tools. Technical communication can contribute here because it already has mechanisms to understand user needs, improve products, and communicate effectively with users.
Another crucial aspect is equality and social justice. Technical communication traditionally advocates for user concerns and understands their needs and product usage. Technical communicators can contribute to creating diverse testing environments to ensure that machine learning models receive data from various relevant user groups. This contributes to more equitable and socially just technologies.
This article summarizes the presentation by Dr. Nupoor Ranade at the IUNTC meeting on February 29, 2024.
Find out more about the International University Network in Technical Communication and upcoming events here.