Agent software comprehension: explaining agent behavior
It is important for designers, developers, and end-users to comprehend (or explain) why a software agent acts in a particular way when situated in its operating environment. Comprehending agent behaviors in an agent-based system is challenging because of environmental uncertainty and the dynamics and multitude of agent interactions, all of which must be captured, processed, and analyzed by the human user. While traditional software comprehension answers "what is happening in the implementation?", this research goes a step further and facilitates comprehension by answering "why is the behavior happening in the implementation?". To explain agent behaviors in the implemented system, this research combines a model-checking approach for representing abstracted software behavior with a reverse-engineering approach for verifying the expected behavior model against the implementation's actual behavior, while adopting the terminology and framework of abductive reasoning. This research shows empirically that maintaining accurate background knowledge of how the implementation is expected to behave is crucial for generating accurate explanations of agent behavior. The resulting Tracing Method and accompanying Tracer Tool build on ideas from existing approaches and extend the state of the art to better assist human users of various skill levels in comprehending agent-based software by automating many reasoning tasks. The Tracing Method is applied to two domains to demonstrate the capabilities of the Tracer Tool in (1) suggesting background knowledge updates, (2) interpreting actual behaviors from implementation executions, and (3) explaining observed agent behaviors. This research aims to help designers who want to improve agent behavior, developers who need to debug and verify agent behavior, and end-users who want to comprehend agent behaviors.
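The core idea of checking an expected behavior model against an observed execution, and abductively hypothesizing background-knowledge updates when they disagree, can be illustrated with a minimal sketch. This is not the thesis's Tracing Method or Tracer Tool; the model format, condition names, and `explain_trace` function are all hypothetical simplifications chosen for illustration.

```python
# Hypothetical sketch: background knowledge as a mapping from an observed
# condition to the action the agent is expected to take.
expected_model = {
    "obstacle_ahead": "turn",
    "goal_visible": "move_forward",
    "battery_low": "return_to_base",
}

def explain_trace(trace, model):
    """For each (condition, action) step of an execution trace, report
    whether the observed behavior matches the expected model; on a
    mismatch or a gap, emit an abductive-style hypothesis that the
    background knowledge may be inaccurate or incomplete."""
    explanations = []
    for condition, action in trace:
        expected = model.get(condition)
        if expected is None:
            note = "no background knowledge for this condition"
        elif action == expected:
            note = f"matches model: {condition} -> {expected}"
        else:
            note = (f"mismatch: expected {expected}; hypothesis: the model "
                    f"entry for '{condition}' may need updating")
        explanations.append((condition, action, note))
    return explanations

trace = [
    ("obstacle_ahead", "turn"),   # agrees with the model
    ("goal_visible", "stop"),     # deviates from the expected action
    ("sensor_fault", "halt"),     # condition absent from the model
]
for cond, act, note in explain_trace(trace, expected_model):
    print(f"{cond}/{act}: {note}")
```

Even this toy version shows why accurate background knowledge matters: a stale or missing model entry turns a correct behavior into a spurious "mismatch", so the tool's first job is to suggest model updates before explanations can be trusted.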