AIAP (AI Agent Platform)

Lead Product Designer / UX Researcher
Project Overview
Despite advancements in large language models (LLMs) leading many services to claim that AI is now accessible to all, most current AI development platforms still require substantial effort and high technical expertise. This makes it challenging for non-experts to utilize AI in their everyday lives and workplaces. To address this issue, we conducted a formative study to identify user needs and derived design implications for a no-code AI tool. Leveraging these insights, we introduce AIAP, a no-code platform that combines natural language input with visual programming to streamline AI service development. AIAP refines user prompts through its AI Suggestion feature and constructs workflows using modular, unidirectional steps. Our user study with 22 participants demonstrated that AIAP reduces workload and enhances overall usability compared to an existing tool, while accommodating a wide range of user preferences and needs. We conclude by discussing the implications of LLM-based service development.
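The "modular, unidirectional steps" mentioned above can be illustrated with a minimal sketch. All names below are hypothetical, chosen for illustration rather than taken from AIAP's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One modular step: a prompt plus the data and action connected to it."""
    prompt: str
    data: dict = field(default_factory=dict)   # e.g. {"Recorded contents": "AIAP_instruction.mp3"}
    action: str = ""                           # e.g. "send_email", auto-identified from the prompt

def run_workflow(steps, execute_step):
    """Run steps strictly in order (unidirectional): each step sees the
    accumulated results of every earlier step, never of any later one."""
    results = []
    for step in steps:
        results.append(execute_step(step, list(results)))
    return results
```

The unidirectional constraint falls out of the loop itself: because each step only receives a copy of the results produced so far, no step can depend on one that comes after it.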
My Contributions
I led the early stages of this project, focusing on product design and UX research: I designed and evaluated the AIAP tool and provided insights on the future direction of no-code visual programming tools. Based on this work, I wrote a paper submitted to CHI 2025, the most prestigious conference in the UX/HCI field, on which I am listed as first author.
Duration
Rollout period: 5 months
Apr 2024 – Jul 2024: Design and user research
Jul 2024 – Sep 2024: Writing

Fig. 1. The builder page for creating AI services, shown with the example screen for Task 1 of the user study: (a) A section for entering the desired instructions. Once input is provided, an AI suggestion appears that interprets the prompt (a-1). Because a data definition is required, it is labeled Recorded contents. (b) A unidirectional, modular step system in which user inputs accumulate step by step and the service executes in sequence. (c) The interpreted prompt. Data connections are displayed as capsules, with the action section highlighted in bold. When data is connected, it appears as AIAP_instruction.mp3. (d) The data field, which allows the registration of files, URLs, and databases and connects them to the prompt. (e) The action field, which automatically identifies actions from the prompt and displays related functionalities or APIs for automatic linking. (f) A menu for switching between the Service tab and the Schedule tab.

Fig. 2. Example results from the visual programming session: flows for an English-learning service created with post-it notes by a non-developer and a developer. Non-developers generally find it more challenging than developers to configure data or organize the overall flow. (a) Result from ND2. Non-developers outline the service flow simply, without focusing on implementation or structure. (b) Result from D2. Developers break the service down by function and provide detailed descriptions of the data and logic.

Fig. 3. Prompt input and data connection: (a) When a prompt is entered, AI Suggestion reorganizes the sentence and highlights the action, Indicate, in bold. Undefined data is displayed as list of websites. (b) The user adds the file they want to connect in the data field. (c) When the undefined data is selected and connected, it is displayed as image_link.xlsx.
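The capsule behaviour in Fig. 3, where an undefined placeholder in the interpreted prompt is replaced by the file the user connects, can be sketched roughly as follows. The bracket syntax and the function name are assumptions made for illustration, not AIAP's actual format:

```python
import re

def connect_data(interpreted_prompt: str, connections: dict) -> str:
    """Replace undefined-data capsules, written here as [list of websites],
    with the connected file name, e.g. image_link.xlsx."""
    def resolve(match):
        label = match.group(1)
        # Leave the capsule untouched if no data has been connected yet.
        return connections.get(label, match.group(0))
    return re.sub(r"\[([^\]]+)\]", resolve, interpreted_prompt)
```

For example, `connect_data("Indicate the images from [list of websites]", {"list of websites": "image_link.xlsx"})` would substitute the capsule while leaving unconnected capsules visible as prompts for the user to act on.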

Fig. 4. Adding and modifying prompts: (a) Adding a prompt. (b) Deleting a prompt. (c) Adding a new prompt.

Fig. 5. The builder page for creating AI services, shown with the example screen for Task 3 of the user study: (a) The functions and APIs required to execute the prompt have been added. (b) When the 'Execute' button is pressed, the task is performed as shown in b-1; in this task, the result of analyzing the list is sent via email. When 'Publish' is clicked, as shown in b-2, the entire workflow is analyzed to automatically generate the category, title, and description, so the user does not need to input them separately.

Fig. 6. Wilcoxon signed-rank test results for NASA-TLX: AIAP showed statistically significant improvements over the baseline across all items for Tasks 1, 2, and 3. A lower score indicates a more positive evaluation. ***, **, and * represent significance levels of 0.0001 < p < 0.001, p < 0.01, and p < 0.05, respectively.
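The analysis in Fig. 6 can be reproduced in outline with SciPy's paired Wilcoxon signed-rank test. The function name and thresholds below are illustrative, and the data passed in would be per-participant NASA-TLX scores, not the study's actual numbers:

```python
from scipy.stats import wilcoxon

def compare_tlx(baseline_scores, aiap_scores):
    """Paired Wilcoxon signed-rank test on per-participant NASA-TLX scores.
    Lower scores are better, so a significant result alongside lower AIAP
    scores indicates reduced workload. Returns the p-value and a star label."""
    _, p = wilcoxon(baseline_scores, aiap_scores)
    if p < 0.001:
        stars = "***"
    elif p < 0.01:
        stars = "**"
    elif p < 0.05:
        stars = "*"
    else:
        stars = "n.s."
    return p, stars
```

The Wilcoxon test is the appropriate paired nonparametric choice here because NASA-TLX ratings are ordinal within-subject measurements, so a paired t-test's normality assumption need not hold.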