Autonomous Multi-Criteria Decision-Making in Ambient Intelligence


Petr Tucnik
Department of Information Technologies, University of Hradec Kralove, Czech Republic

1 Introduction

Multi-criteria decision making (MCDM) is an approach typically used in managerial and planning decisions or for decision support. It allows a mathematically precise consideration of a large number of factors before a decision is made. This work focuses on the use of this approach in autonomous entities (agents), which requires some modifications. Standard MCDM uses pre-prepared data, whereas an agent acts autonomously, and all information needed for making a decision must be in a correct format so that it can be processed mechanically (by software). This somewhat changes the usual perspective of MCDM theory, which is not primarily intended for autonomous decision making or autonomous systems. The intended use of MCDM in agent decision-making processes is as a control mechanism for high-level decisions. Similarly to company practice, where MCDM is used by management, previous experience with applications in soccer robotics (Tucnik, 2007) and other areas has shown that the MCDM approach is most effective in high-level decision making and reasoning. Although application to lower levels of behavior is possible in principle (movement, pathfinding, manipulation with objects, etc.), other methods (artificial neural networks, genetic algorithms, control theory, etc.) are often more suitable or more effective. This work is therefore focused on the application of MCDM to high-level decision making, the level where strategic decisions and planning are made. The proposed approach has already been used (with certain modifications) in soccer robotics (Tucnik, 2007), but our domain of interest here, ambient intelligence, is intended as the future application area of MCDM as a control mechanism.
The notion of ambient intelligence (AmI) refers (Augusto & McCullagh, 2007; Cook & Das, 2004) to a digital environment that proactively but sensibly supports people in their everyday activities. Historically, this approach is an extension of artificial intelligence (AI), and many of its principles were built on both the theoretical and practical basis of AI. In pursuit of an intelligent environment (i.e., an environment containing elements that exhibit ambient intelligence in some way), it is often both necessary and useful to draw on ideas from AI; the agent paradigm will be the most useful concept for our purposes here. The most obvious advantages of applying MCDM arise from its mathematical basis. Possibly the most important advantage is the possibility to include a large (theoretically unlimited) number of factors in the decision-making process. As a consequence, even a very complex decision-making situation may be processed and reasonable decisions obtained, provided several design rules are followed. Other advantages include an adjustable level of abstraction in the task environment representation, the speed of decision making, and modularity of design. These matters will be explained in more detail in the following text.

2 Autonomous Approach

The shift to autonomy with the use of MCDM (autonomous decision making is not a standard use of this approach) requires a change of perspective in the perception of both agent and environment. In this part, a more precise specification of these terms will be offered, which will clarify the perspective of all related aspects introduced in the rest of this chapter.

2.1 Agent

The intended use of the MCDM approach is in autonomous entities, which in artificial intelligence are called agents. As it is good practice to start with basics, we will use the following definition of an agent (adopted from (Russell & Norvig, 2009)): A (computer) agent is something that acts and is expected to operate autonomously, perceive its environment, persist over a prolonged period of time, adapt to change, and create and pursue goals. Furthermore, this conception may be extended to cover the efficiency of behavior (again (Russell & Norvig, 2009)): A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. The first definition quite accurately covers all important features of agents, and they will be perceived as such in the following text. This concept is easily applicable in the area of ambient intelligence. For our purposes, objects in the (ambient) environment capable of any type of action will be perceived as agents and, as presumed, will be capable of functioning on their own, following their respective internal routines, procedures, or algorithms. Another aspect often mentioned in relation to agents' behavior is proactivity: a proactive agent is capable of acting on its own, without a direct stimulus from the surrounding environment. There are several basic agent architectures, which differ in the complexity of features they can use in decision making, ranging from environment representation (deliberative agents) to social behavior (social agents) or combinations thereof (hybrid agents). Proactivity is a feature of the more complex agent architectures; the simplest case of a purely reactive agent (responding to external stimuli only), without an internal representation of the surrounding world, is not considered here. It is not useful in an AmI environment (AmIE), because environment perception plays an essential role in decision making with this method of control.
The second definition mentioned above (rational agent) introduces the notion of rationality in connection with the architecture of agents. It is the fundamental aim of the MCDM principle to find the most effective or optimal solution to every given decision-making situation, and therefore the status of a rational agent is what we would like to achieve in our agents. Achieving it is a matter of reasonable and correct design. Decisions are only as good as the information they are based upon; sub-optimal performance may be caused not by a wrong decision-making method but by insufficient information. Rationality of behavior is a complex problem which could extend our text into cognitive science (as well as other domains). As it is not the purpose of this text to focus on this issue in detail, the definition mentioned above will be considered sufficient for our purposes. The reader should bear in mind that there are more ideas hidden in the notion of a rational agent than will be discussed here; for more information on related topics, see (Pfeifer & Scheier, 2001).

2.2 Agent Definition

From the perspective of the MCDM approach, every piece of important information has to be included in the model of the system. As the agent is an inseparable part of the environment, it has to be described in a similar way as the environment surrounding it. Before the environment itself, the agent will be defined formally and in more detail. An agent is defined as a 3-tuple:

AGENT = {P, G, V}, (1)

where P is the set of perceptive functions (each sensor has one perceptive function, and an agent may have more than one sensor), G is the set of goals, and V is the set of actions the agent is able to perform.
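The 3-tuple of Eq. (1) can be sketched in code. The following is a minimal Python illustration; the choice of Python, the type aliases, and all attribute names are our own assumptions, not part of the chapter:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical representation: an attribute vector maps attribute names to values.
Attributes = Dict[str, float]

@dataclass
class Agent:
    """AGENT = {P, G, V}: perceptive functions, goals, actions (Eq. 1)."""
    P: List[Callable[[Attributes], Attributes]]  # sensors: world -> perceived attributes
    G: List[str]                                 # goal descriptions (placeholder form)
    V: List[Callable[[Attributes], Attributes]]  # actions: environment -> modified environment

agent = Agent(
    P=[lambda world: {k: v for k, v in world.items() if k.startswith("room_")}],
    G=["maximize user comfort"],
    V=[lambda env: {**env, "heater_on": 1.0}],
)
assert len(agent.P) >= 1  # at least one sensor input is required (see Section 2.2)
```

The sets are kept as plain lists of callables and labels; a real system would attach richer metadata to each element.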

According to definition (1), an agent consists of three important parts determining what it is able to sense (perceive), what it wants to do, and what it is capable of doing. These parts will be discussed in more detail here; together they form what the literature calls the sense-think-act cycle, shown in Figure 1. This loop allows the agent to perceive its environment, decide, and act; the cycle is theoretically infinite. This is a universal concept describing an agent's behavior. It has to be pointed out, however, that the cyclical behavior shown in Figure 1 is intended for agents which have all the components necessary to be autonomous on their own. Agents in our perspective are independent of the user, not individually of each other. In principle, this is not an issue, because the multi-agent system is intended to be cooperative.

Figure 1: Sense-think-act cycle of an agent (adopted from (Russell & Norvig, 2009))

Agent's Perception

Perception plays a crucial role in an agent's design because it directly constrains the agent's functionality. The importance of context when providing services is described in (Bottaro et al., 2007). An agent has to be able to perceive at least the impact of its own actions (for a formal specification, see below) in order to function properly and optimize its behavior towards some ideal state, whichever it may be. Without this feedback, the action-impact-correction-action loop will never be created, and machine learning methods cannot be successfully implemented in such systems. Keep in mind that it is unsupervised learning that is considered here (see (Russell & Norvig, 2009) for a more detailed description of machine learning techniques). The supervised learning paradigm is not suitable because an AmI system should be able to adapt to the user's needs as they occur, and this cannot be prepared in advance (because it is unknown).
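The sense-think-act cycle of Figure 1 can be sketched as a short loop. This is a hypothetical Python illustration; the agent structure, the trivial decide step, and all names are invented for the example and are not the chapter's MCDM procedure:

```python
class SimpleAgent:
    """Minimal stand-in for AGENT = {P, G, V}; all names here are illustrative."""
    def __init__(self):
        self.P = [lambda w: dict(w)]                       # one perceptive function
        self.G = ["raise temperature to 21"]               # one goal label
        self.V = [lambda w: {**w, "temp": w["temp"] + 1}]  # one warming action

def decide(agent, percept):
    # Placeholder for the MCDM step described later: always pick the first action.
    return agent.V[0]

def sense_think_act(agent, world, steps=3):
    """A finite run of the (theoretically infinite) sense-think-act loop."""
    for _ in range(steps):
        percept = {}
        for p in agent.P:                    # sense: apply every perceptive function
            percept.update(p(world))
        action = decide(agent, percept)      # think: choose an action
        world = action(world)                # act: the action modifies the world
    return world

print(sense_think_act(SimpleAgent(), {"temp": 18}))  # {'temp': 21}
```

In a deployed system the loop would run indefinitely and decide would be replaced by the multi-criteria evaluation introduced in Section 3.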
Also, the real-world environment is the intended application environment here, and as such it is more difficult to prepare agents beforehand for their tasks. Supervised learning is typically used in limited, well-structured problem domains, preferably of a deterministic nature; this is far from the environment of ambient intelligence. Our ideal agent is capable of perceiving at least the impact of its own actions. There are two fundamental sources of information available to an agent, regardless of its construction, tasks, decision-making algorithm, etc.: internal (the agent's own sensors) or external. From the decision-making perspective, there is no difference (at least for the purpose of this text) between information scanned from the environment directly by the agent's sensors and information obtained from the outside, e.g., from other agents, an operator, a database, etc. However, external information may always be flawed by unintentional or intentional error, and both of these cases are out of the agent's control.

Information verification is quite difficult to design and implement. Especially in multi-agent systems where antagonism exists between agents, disinformation may occur frequently. This is typically not the case for multi-agent systems in an environment with ambient intelligence, but it should be noted that it poses some level of risk (e.g., in case of a security attack). Agents in an AmIE typically cooperate with each other, providing information freely and correctly (as far as they are capable). The construction of agents in an AmIE is very often uncomplicated and quite simple. Active objects (i.e., objects capable of actions) in the environment are taken as agents, and they are frequently severely limited in the number of activities they can perform. This follows, in the first place, from the limited number of tasks they are intended to do. One probably will not expect a coffee maker to move freely around the apartment or a vacuum cleaner to bring the newspaper. It is a matter of common-sense design, where purpose defines the set of actions for every agent present in the system. If we establish the term complete agent for an agent capable of sensing, decision making, performing actions, and moving, situated in an adequate environment and suitably embodied, then such an agent is most probably not a typical AmIE agent. The typical AmIE agent is incapable of perception by itself (or has only limited equipment allowing it to perceive its environment), immovable, has limited actuator capabilities, or combines several of these limitations. Most likely, an AmI multi-agent system will emphasize mutual cooperation, free exchange of information, and emergence of the desired behavior from interactions between agents rather than their full individual functionality. The social dimension of agent interactions has to fill the functionality gap resulting from this simplicity of design.
In other words, specialization and information sharing are supposed to compensate for the missing full functionality of agents in an AmIE system. Special cases of agents may often be present in the AmIE: action-agents capable of performing actions but without any sensor equipment, or sensor-agents capable of perception but incapable of actions. These extremes, in fact, do not require a modification of the theoretical structure as presented here, because the whole multi-agent system is presumed to work together in a coordinated, cooperative way; information is shared freely. The set P (from Eq. 1) contains the perceptive functions used by the agent. At least one sensor must be present in the set P in order to get the necessary information from the environment to the agent. This extends to sensor agents as well: they substitute for sensors directly attached to the agents performing actions. If the agent has no sensor apparatus of its own, it has to be connected to at least one sensor input provided by a sensor agent. The case of only one sensor is the minimal case; in most real-world applications, a combination of several sensors (as sources of external information) will very probably be used. The perceptive functions ρ from the set P transform scanned data about the external world W into the attribute representation E of the environment. Formally:

ρ_A: W → X, X ⊆ E. (2)

The perceptive function ρ_A transforms part of the world into attributes belonging to E. We can also describe perception alternatively:

ρ_A(W) = X, X ⊆ E. (3)

Eqs. (2) and (3) have the same meaning, but in some cases one form of notation may be more useful or transparent than the other; this is why both are presented here. The X in Eqs. (2) and (3) represents a limited part of the environment within the scanning range of the sensor with perceptive function ρ_A. The scanning range of any single sensor usually does not cover the whole environment (and even if it did, the theoretical description has to be universally valid); therefore, X is a subset of E. The area which can be influenced by the agent's actions can be formally described as:

ν_A(Y), Y ⊆ E, (4)

where Y is the part of the environment modifiable by the agent's actuators. It is a matter of good design to ensure that the following holds:

ρ_A(ν_A(Y)), where Y ⊆ X ⊆ E, (5)

i.e., the modifiable part is perceptible by the agent. The agent must be able to perceive the effects of its actions; for more details, see (Russell & Norvig, 2009) and (Tucnik, 2007). The condition in Eq. (5) allows the agent to learn from its own actions and, as a consequence, allows the implementation of a machine learning method of the designer's choice. Eq. (5) also represents the minimal range of sensors; the actual range will very probably be wider. The influence of exogenous factors cannot always be prevented (in the learning process), but this is a common problem in all real-world applications, regardless of the architecture used. The perception of the agent's actions has to be available at least to some reasonable level of accuracy in order for the agent to learn; implementing the sensors correctly is a matter of proper design and preliminary testing. For every attribute a in the environment, the following holds:

∀a ∈ E \ C: ∃ρ_A(X), a ∈ X ⊆ E. (6)

The whole environment (apart from calculated attributes, which are not obtained via sensors; see below) must be perceptible. It may be useful at this point to mention again that the level of abstraction is chosen by the designer of the system in the design phase, and the environment description covers all that is needed for the multi-agent system to function properly. As a result of formula (6), the entire environment (in the form of attributes) is perceptible by the multi-agent system.
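The design condition of Eq. (5), that every actuator-modifiable attribute lies within sensor range, reduces to a subset check once attributes are named. A small Python sketch (attribute names are invented for the example):

```python
def perceives_own_actions(sensed: set, modifiable: set) -> bool:
    """Design check for Eq. (5), Y subset-of X subset-of E: every attribute the
    actuators can modify must lie within the range of some sensor."""
    return modifiable <= sensed

X = {"room_temp", "light_level", "door_open"}   # attributes within sensor range
Y = {"room_temp", "light_level"}                # attributes actuators can change
assert perceives_own_actions(X, Y)              # feedback loop for learning exists
assert not perceives_own_actions({"room_temp"}, Y)  # a partly blind actuator: bad design
```

Such a check could run once at design time, before the agent is deployed.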
Individual perceptive functions of agents may overlap (meaning they scan/perceive the same attributes). For our case, we will assume that the same attribute scanned by different sensors has exactly the same value (we neglect the case of wrong sensor calibration or noise effects here). Scanning of the environment is assumed to be continuous (real-time) or done at short time intervals. Although the term real-time is somewhat vague, we will simply assume that when something new happens in the environment, there should be no significant delay between the occurrence of such an event and obtaining this information via sensors. Normally, the agent acts in repeating cycles (as indicated in Figure 1), and there is often some time-consuming task involved in this cycle every time it runs. This task (e.g., video recognition and processing, which is typically very time-consuming) defines one cycle of an agent. Its length varies from very short intervals (well under a second) in simple systems to longer delays (around 1 second). An agent generally has better reflexes if its cycle is short, as in the case of robotic soccer (where the faster team usually wins). In the case of an AmIE, expectations for quick reactions are not so high, and even slower reactions are sufficient.

Agent's Goals

The second part of the agent's definition from Eq. (1) is the set of goals G. This part of the agent is closely related to the notion of rationality of behavior, as already mentioned in the introduction. When making a decision, it is necessary to differentiate possible actions and their outcomes, to recognize bad and good solutions. Goals provide the measure for this differentiation: actions that bring the agent closer to a goal state are preferred over others. The idea is to allow continuous improvement of functionality. This corresponds with the general requirements on an agent's learning component listed in (Russell & Norvig, 2009):

- Noise immunity: the learning component of an agent should be able to work with a filtered, noisy signal from sensors.
- Fast convergence (to optimal values).
- On-line learning: the agent is capable of learning while performing its task.
- Incremental learning: the learning process is continuous.
- Tractability: learning has to be computable in real time.
- Groundedness: the solution is based only on information from the environment or information obtained over time; the learning process is realized from the perspective of the agent.

In real-world applications, it is often complicated to express explicitly the desired goal state of the environment, even in the case of a single criterion. It is simple to imagine situations where an explicit interpretation may be used, e.g., task (A): Bring 10 items X from area A to area B. This is easy to interpret as a description of a goal state (10 items X in area B). Other ideal solutions may not be so easy to describe or find. Consider task (B): Move from A to B with minimal energy consumption. When both scenarios take place in the real world, the environment will be stochastic and nondeterministic, and many things may happen during task processing. It is generally easier to work with task (A), whose progress is easier to measure, than with task (B). In scenario (B), an ideal solution would obviously be one which requires no energy consumption at all, and this is not possible.
Therefore, the goal state is unreachable in this case: solutions will be considered more efficient (better) when they are nearer the goal state, but none of them can meet the precise definition of the ideal state. For (B)-type tasks, a principle of maximization is used, see (Ramik, 1999): we always try to maximize the attribute, not to minimize it. In fact, this is just a matter of perspective: instead of consumption, terms such as preservation or saving are used, etc. This reformulation makes the whole work easier, as we do not have to differentiate positive and negative aspects of solutions; all are always positive. For goal definition, other approaches may also be applicable and worth considering. E.g., one possible way to prepare the problem definition in a suitable form is the use of preferences in goal specification, see (Ennaceur et al., 2011; Fürnkranz & Hüllermeier, 2010). This would require some modifications in the formal description, but otherwise these are valid, applicable approaches. Since this goes beyond the scope of this text, we will not discuss these matters any further.

Agent's Actions

The last part of the agent's definition (1) is the set V of possible actions. This reflects the ability of the agent to influence the environment. The actuator set should be designed in a way which allows the agent to pursue its goals. In the case of standard (hardware) agents, actuators are mechanical arms, movement apparatus, servos, etc. Given the design limitations of AmIE agents described above, i.e., that they are typically not complete agents, the actuator set is typically closely related to the specific agent's function, according to its specialization.
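The maximization reformulation described above can be sketched numerically. In this hypothetical Python example, the minimization criterion of task (B), energy consumption, is turned into a "saved energy" attribute to maximize; the budget bound is an invented value, not from the text:

```python
def energy_saved(consumed: float, budget: float) -> float:
    """Reformulate 'minimize energy consumption' (task B) as maximization:
    the attribute expresses energy saved, normalized to 0..1.
    'budget' is an assumed upper bound on consumption."""
    return max(0.0, min(1.0, (budget - consumed) / budget))

# Candidate plans are now always scored positively; less consumption scores higher.
assert energy_saved(20.0, 100.0) > energy_saved(80.0, 100.0)
assert energy_saved(0.0, 100.0) == 1.0   # the (unreachable) ideal state
```

With every attribute expressed in this "the more, the better" form, solutions never need to be split into positive and negative aspects.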

There exists a dependency between the agent's perception and actuator functions. The agent is supposed to be able to perceive the direct impact of its own actions; in practice, this means that an immediate change in the environment, emerging as a result of the agent's action, is to be noticed. This is the prerequisite of machine learning and continuous improvement of the agent's performance, i.e., adaptation to changes in the environment. But real-world problem domains often come with exogenous factors, since the model of the world is created by/for the agent. Therefore, the accuracy of environment perception decreases over time, as these exogenous factors increase their impact. The situation of the agent in the context of its actions is captured in Figure 2. The agent is able to make changes in its environment but, in general, not the whole impact of its own actions is perceived. Also, the agent is able to perceive only a limited part of the real world. This poses no problem, though, as people (as an example of intelligent beings capable of manifesting intelligence) are also limited in their perception (people are unable to see part of the color spectrum or hear ultrasound or infrasound, etc.). Rational behavior can be achieved without the capability to perceive everything in the environment. It just has to be taken into consideration that the part influenced by actions in Figure 2 should be included in the part covered by sensors as much as possible, ideally wholly.

Figure 2: Perception and activity of an agent

Inaccuracies resulting from exogenous factors cannot be entirely prevented or eliminated from the model. Every real-world application requires a certain level of abstraction when describing the environment, objects, their mutual dependencies and relations, etc. Also, phenomena previously unknown or not anticipated by the designer of the model effectively function as exogenous factors.
As a result, the loop between actions and the perception apparatus will contain errors resulting from unknown factors which change the state of the environment independently of the agent and/or of which the agent is unaware. As a consequence, the agent is capable of perceiving actuator impact with reliable precision only partially or over a limited period of time.

2.3 Environment

From the MCDM perspective, the whole environment (including the agent, etc.) is perceived as a set of attributes. Attributes are numerically represented properties of the environment or factors describing the agent's characteristics or condition. The whole description is domain-dependent and is created before the agent begins functioning in the environment. The term attribute is sometimes used by other authors in a different meaning; see (Keeney, 1993; Howard, 2007; von Winterfeld & Edwards, 2007) for comparison. In this text, the term attribute will be used to describe the environment. The environment has to allow machine processing of information; in order to do so, a numerical representation of information (attributes) is required. This may not be obvious, but the requirement of numerical representation is a significant constraint on the description process. It is easy to come up with examples of adjectives with non-specific meaning (young/old, slightly used, heavy, etc.) and other obstacles related to natural language, and the designer of the system has to clarify these terms numerically (possibly with the help of an expert in the field, if that can be arranged). Fuzzy set theory offers one possible tool to handle such information capture; other means are, e.g., repertory grids, cluster analysis, designer-defined scales for diversification of a term, etc., see (Awad & Ghaziri, 2004). This is, in fact, a matter of proper handling of knowledge/information capture and, as such, not closely related to the main problem discussed here. When creating an attribute, the following questions should be asked:

- Is the attribute necessary for the proper function of an agent?
- Is the attribute unambiguous? (If not, can it be renamed?)
- Is the attribute correctly represented (appropriate scale, units of measurement)?
- Is the attribute related to an actuator function?
- Is the attribute related to the goal definition?

Answering these questions helps to consider each attribute carefully and assign its description value according to the given circumstances. It is also possible to leave out attributes of little importance, i.e., to set a suitable level of abstraction for the model in question.
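A designer-defined scale, one of the tools mentioned above for turning vague natural-language terms into numbers, might look as follows. This Python sketch and its breakpoints are invented for illustration; a real system would agree on them with a domain expert:

```python
def age_attribute(age_years: float) -> float:
    """A designer-defined scale mapping the vague young/old adjective onto
    a numeric attribute in [0, 1]; the breakpoints are illustrative assumptions."""
    scale = [(18, 0.0), (35, 0.25), (50, 0.5), (65, 0.75)]
    for limit, value in scale:
        if age_years < limit:
            return value
    return 1.0

assert age_attribute(25) == 0.25   # "young adult"
assert age_attribute(70) == 1.0    # "old"
```

Fuzzy membership functions would serve the same purpose with smooth transitions instead of steps.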
While designing the environment description, there are also several advantages to be considered:

- The problem domain and its description are limited: there exists only a finite number of important factors needed in the model which are difficult to represent numerically. Most of the description will very probably pose no problem.
- The description is prepared in advance: the system is not yet active, therefore description problems do not interfere with control. New components (or pieces of the model description) are introduced into the system only after this preparation phase.
- The level of detail may be determined by the designer of the system: this is related to the abstraction mentioned above; the designer may choose the level of detail according to his/her needs. Also, it is often possible to split one attribute into several sub-attributes, creating a hierarchy of attributes and making the description easier. A similar mechanism can be used to merge attributes together if needed.

Formally, the environment E can be defined as follows:

E = A ∪ F ∪ C ∪ U, (7)

where E represents the environment, A is the set of agent-related attributes, F is the set of environmental factors, C is the set of calculated attributes, and U is the set of user-related attributes (problem definition). Each part of the environment E will now be described in more detail. The agent-related attributes of the set A describe the physical state of the agent: remaining energy/fuel, condition of individual parts, situatedness in the environment (see (Russell & Norvig, 2009)), internal states, etc. In this context, situatedness is a technical term expressing the fact that an agent is not an isolated entity but exists in an (adequate) environment. The set F of environmental factors contains attributes describing the environment: room size, appliance location, number of persons, temperature, etc. This set of attributes is usually easy to work with. The set C contains calculated attributes, which are used when needed and are calculated separately. Typically, there is no need to watch all attributes existing in the environment all the time; many of them have to be calculated only when needed. Short-term predictions of the position of important elements in the environment (persons, agents, objects) are a typical example of calculated attributes. Also, when additional information is necessary in order to improve the quality of a decision, it is possible to run additional procedures, for example enhancing the quality of video image recognition, etc. The set C has a support role but, as previous research in this area indicated, see (Tucnik, 2007), it is very important: a large part of the information needed for qualified decisions is in the form of attributes of the set C. The last set, U, of user-related attributes contains information about users and their preferences, which helps the agent find appropriate actions towards the user. U = U_1 ∪ U_2 ∪ … for the individual users' attribute sets U_1, U_2, etc. At least one user must be defined (U is a non-empty set).
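The four-part environment description can be sketched directly with sets. The attribute names in this Python illustration are invented; only the structure follows Eq. (7) and the non-emptiness constraints stated above:

```python
# Illustrative attribute sets for the four parts of E; all names are invented.
A = {"battery_level", "arm_position"}         # agent-related attributes
F = {"room_temp", "persons_present"}          # environmental factors
C = {"predicted_user_position"}               # calculated attributes (may be empty)
U = {"user1_pref_temp", "default_pref_temp"}  # user-related attributes (non-empty)

E = A | F | C | U                             # the environment as a union of sets
assert A and F and U                          # A, F, U must be non-empty
assert len(E) == len(A) + len(F) + len(C) + len(U)  # the four sets are disjoint here
```

Keeping the sets disjoint makes it trivial to route each attribute to the component (sensing, calculation, user model) responsible for maintaining its value.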
Typically, a preference-neutral default user description is also present in this set, which is useful when dealing with new users in the AmIE. For the definition of the environment E, as presented, the following expressions hold:

A = {a_1, …, a_k}, F = {a_{k+1}, …, a_l}, C = {a_{l+1}, …, a_m}, U = {a_{m+1}, …, a_n}, (8)

E = A ∪ F ∪ C ∪ U = {a_1, …, a_n}, (9)

A ≠ ∅, F ≠ ∅, U ≠ ∅. (10)

The set C can be empty (meaning no calculated attributes are used), but this is not the usual case. Together, the n attributes form the environment E.

2.4 Data Preparation and Structure

Another possible classification of attributes is by the source of their origin (from the agent's point of view). From this perspective, three groups of attributes exist in the model: external, internal, and combined. External attributes are obtained through scanning the environment around the agent by sensors. Internal attributes are related to the agent's problem definition, task information, etc.; this information is stored inside the agent (in the form of data). Combined attributes are derived from external information but modified by an additional agent-specific information layer; an example is a map created by the agent. The source of information is important for reliability, but in other aspects this differentiation can be neglected. While internal attributes are considered completely reliable with regard to noise, quality of data transfer, etc., external data are captured and scanned by the agent's sensors (or other agents' sensors). This creates some risk of inaccuracy because of wrong sensor calibration or changing conditions outside the agent's body that affect the sensors (changes in temperature, moisture, pressure, EM interference, etc.). For the purpose of this text, an assumption will be made that sensors are always calibrated correctly and the obtained data are without any noise. However, in real-world applications, it is important to take this into consideration when evaluating the agent's performance. All attributes are normalized in order to ensure their mutual comparability; the implication is obvious: it allows values in different measurement units to be used together. For normalization, the following formula is used:

norm(a) = (a_actual − a_min) / (a_max − a_min). (11)

The boundaries a_min and a_max define the range of attribute a, limiting its possible values. After normalization, the range of values is norm(a) ∈ [0, 1]. The normalization procedure is handled by the sensor module of the respective agent (attributes of the set C are normalized as well). Every attribute in the set E is normalized, and the normalization takes place before attributes enter the decision-making process. In practice, normalization filters all decision-related data, i.e., all attributes. Since one sensor may scan several attributes at once (at least one), the situation is similar to the one shown in Figure 3.

Figure 3: Scanning attributes from the environment

The sensor provides only such information from the environment as it is able to process. However, the range of attributes may be quite wide, depending on the sensor. Also, one agent may be equipped with more than one type of sensor, providing even more information at the same time. In the case of a cooperative multi-agent system (such as the agents in our ambient environment), information may also be provided by another agent.
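The normalization of Eq. (11) is a one-line computation; the following Python sketch adds only a guard for a degenerate range (the unit values are invented for the example):

```python
def norm(value: float, a_min: float, a_max: float) -> float:
    """Eq. (11): map an attribute value into [0, 1] for mutual comparability."""
    if a_max <= a_min:
        raise ValueError("attribute range must be non-degenerate")
    return (value - a_min) / (a_max - a_min)

# Values in different units (degrees Celsius, lux) become directly comparable.
assert norm(21.0, a_min=15.0, a_max=30.0) == 0.4
assert norm(500.0, a_min=0.0, a_max=1000.0) == 0.5
```

As stated above, this filter would sit in the sensor module, so the decision-making procedure only ever sees values in [0, 1].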
When multiple sources (sensors, sensor agents) are used at the same time, the situation is similar to the one in Figure 4.

Figure 4: Multiple scanners used by an agent. The grid on the right side contains all attributes in the environment.

Figure 4 represents the situation when multiple sensors (or sensor agents) are used together to provide the necessary information. The grid on the right side represents all attributes, even those which are not used for the actual decision and are therefore eliminated from the decision-making procedure for the moment. The normalization procedure, as defined by Eq. (11), requires setting the attribute's boundaries: a specific value must be chosen for both the maximum and the minimum of the attribute. There is an issue related to this, because the maximum and minimum may not be known or obvious. A simple solution is to set the maximum high enough, and the minimum low enough, to be sure that they will not be exceeded. However, the following situation may then occur (see Figure 5).

Figure 5: Setting an attribute's range, wrong case (circles represent examples of attribute values)

In the case shown in Figure 5, the range of the attribute is set wrongly. In all the cases (indicated by numbers), the value of the attribute was always very far from both the minimum and the maximum. After normalization, all values of this attribute will probably lie very near to each other, making all but extreme changes unnoticeable. Possible solutions:

(A) Increase the weight of the attribute; this will increase the impact of a normal change in the attribute's value. The risk lies in the occurrence of an extreme value: combined with a high weight, it will very probably disrupt the decision-making process, making the system unreliable and unstable, with inconsistency in its behavior.

(B) Adjust the minimum and maximum of the attribute according to real observations during operation.

Solution (B) is better from a long-term perspective. It requires introducing a testing phase into the agent's lifecycle: when an attribute has been scanned and used several times, its minimum and maximum values are adjusted more accurately. However, because the attribute's definition is modified during this phase and its values from previous decisions change, machine learning may not be used during this time. The modified attribute range is shown in Figure 6.

Figure 6: Setting an attribute's range, adjusted case (circles represent examples of attribute values)

A similar solution may be used when the maximum and minimum are accidentally set too close to each other when the attribute is defined. When the maximum or minimum is exceeded, the limit should be modified to cover the new case. To avoid a similar situation in the future, some tolerance should be applied (5-10%).

3 Decision Making

Attributes, agents, the environment and the basic principles were presented in the previous part. The decision-making principle will be described in this part of the chapter.

3.1 Decision Making Procedure

The basic component of the decision-making process (DMP) is an attribute. An attribute, as a piece of information, is important only in the context of an intended plan, an activity. As a descriptor of the environment, it serves the purpose of showing how the environment would look if a certain action were taken, or how it looks at the moment. Every attribute is only a piece of the mosaic; together, they form the image of the agent's whole world. From the knowledge engineering perspective, where data, information and knowledge usually form a three-level hierarchy, see (Awad & Ghaziri, 2004), an attribute stands at the level of information, because it is always situated in some context. Attributes also describe the environment along the time line.
The present, past and future configurations of the environment are all described in the form of attributes, allowing the agent to learn from past decisions as well as to predict the future (to a limited extent). What we are actually choosing when making a decision is not the immediate action, since the action is only the consequence and implementation of the reached decision, but rather the future configuration of the environment. This is shown in Figure 7. We may distinguish between two planes of existence: reality, named the act space in Figure 7, in which the agent is situated at the moment of decision making, and the several

possible outcomes of actions in the consequence space. Each of the n arrows represents a solution to the given decision-making situation, with a whole sequence of actions (a plan) hidden behind it.

Figure 7: Act and consequence space

When making a decision, the following formula is used for every considered variant V:

c(V) = Σ_{x=1..y} w_x · norm(a_x), (12)

where c(V) represents the suitability of the variant V. The attribute values of each variant are evaluated using Eq. (12), and the importance of each attribute a_x is expressed by its weight w_x. The solution we are looking for is the variant with the highest c(V) value:

solution = max{ c(V_1), c(V_2), ..., c(V_Z) }. (13)

In order for the agent to function properly, a threshold value is defined for it. Its purpose is to allow the agent to work without interruption unless something serious happens; minor disturbances should be ignored. Setting the specific threshold level is the designer's decision; it may be modified later by an appropriate method of machine learning. In the decision-making procedure, the universal sequence of steps shown in Figure 8 takes place. The sequence in Figure 8 begins at the DMP (decision-making point). This is the point in time when the agent receives a stimulus to act. There are generally two types of such stimuli, usually implemented together: either there is a change in the agent's environment which possibly interferes with the agent's current state/activity, or the agent's state is checked periodically. The former reflects the presence of exogenous factors interfering with the agent's on-going activity. Each state has a defined set of attributes to monitor, and these can interrupt the current state if the stimulus is strong enough. In any case, it is not necessary to monitor all the attributes defined in the environment. The strength of the stimulus is measured against the threshold value (i.e. it is compared to it). If the impulse is not strong enough (i.e.
lower than or equal to the threshold), the agent remains in its original state; if it exceeds the threshold value, the agent will leave its original state. To simplify decision making, the number of available variants is taken into consideration in the following step. When there is only one option available, a simple behavioral shift towards another of the agent's activities occurs, because no decision making is required: the agent only needs to change its state to the single available one. In

case of more than one solution, the decision-making procedure takes place as described above, using Eqs. (12) and (13). The suitability of each considered variant is computed using Eq. (12); the received value c(V) is stored, and the solution V_x with the highest c(V_x) value is applied. The procedure described in Figure 8 takes advantage of a state representation of the agent's behavior, which is described in the following part.

Figure 8: Decision making procedure

3.2 State Representation

In the proposed control method, the notion of state refers to the internal configuration of the agent while it is performing an action over a given period of time. Two types of states are recognized: long and short. The distinction is made according to the timeframe of the agent's activity. Long states represent continuous activity of an agent with an unreachable goal state, or an activity that is to be maintained for an indefinite amount of time; typically, a long state is a long-term activity. Long states use the notation S_0 (the letter S stands for state; the subscript 0 is reserved for long states; this notation will be explained further in the following text). Short states represent activity of the agent which is performed over a limited time, or whose goal is reachable in a short period of time. For these temporary states, the notation S_n will be used (the letter S stands for state; the subscript n is a number > 0, since 0 is reserved for long states). The concept of states is useful for several reasons. Most of all, it allows modularity in defining the agent's behavior. Each state is derived from the activity the agent performs, i.e. from the activity of its actuators. The performed activity can even be quite complex, using several actuators at the same time, but typically it is aimed at the modification of one or more of the environment's attributes (and, as a result, at shifting the state of the environment towards the intended goal state).
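The variant evaluation of Eqs. (12) and (13) can be sketched as follows (a minimal illustration; the weights, attribute values and variant names are invented for the example):

```python
def variant_suitability(weights, normalized_attrs):
    """Eq. (12): weighted sum of a variant's normalized attribute values."""
    return sum(w * a for w, a in zip(weights, normalized_attrs))

def choose_variant(variants, weights):
    """Eq. (13): pick the variant with the highest suitability c(V)."""
    return max(variants, key=lambda v: variant_suitability(weights, variants[v]))

# Two variants, each described by three normalized attributes.
weights = [0.5, 0.3, 0.2]
variants = {
    "V1": [0.9, 0.2, 0.4],  # c(V1) = 0.45 + 0.06 + 0.08 = 0.59
    "V2": [0.4, 0.8, 0.9],  # c(V2) = 0.20 + 0.24 + 0.18 = 0.62
}
print(choose_variant(variants, weights))  # -> V2
```

Because the attributes are already normalized into ⟨0;1⟩ by Eq. (11), the weights alone determine how much each attribute influences the choice.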
The presence of a goal is crucial here: the agent is trying to reach its goal all the time, and without its specification there is no way to decide which action should be taken at a given moment. The use of states allows the agent's behavior to be formed piece by piece, from small parts into complex blocks of actions. States are organized into segments. There is always exactly one long state and any number of short states (even none) present in a single segment. The number of segments is unlimited, but at least one segment must be defined for the agent to function; this is, however, a minimal case. To distinguish individual segments, a letter in superscript is used, e.g. S_n^A (S represents state, the number in the subscript position

determines the long/short state number, and the letter in the superscript position designates the segment). Segments are connected as needed (by the designer of the system), but there always has to be a cycle present and all states should be reachable. This formal notation will sometimes be reduced in figures, because the relation of a state to its segment will be obvious from the context. For state representation, the notation shown in Figure 9 will be used.

Figure 9: Graphical notation of the state representation of an agent's behavior

The long state is indicated by the number 0 (there are no segments in this figure, so no further description is needed) and a double circle. Single circles with numbers > 0 represent short states. Figure 9 also shows the allowed transitions between states. There is only one limitation here: there can be no backward transitions between short states. The rule is that from a short state, the only possible transition is forward or to the long state. There are two main reasons to establish such a rule. Firstly, it is easier and more comprehensible for the designer of the system. Secondly, it prevents a possible cycle deadlock between short states. It is also easier to find a possible error or malfunction with such a restriction. Figure 10 indicates the layout of elements. It is divided into segments, with the long states situated near the center and the short states on the outside. Transitions between states are indicated in segment C of Figure 10. As was already mentioned, there is one long state in each segment, along with any number of short states (even none).

Figure 10: Organization of information
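The forward-only restriction on short-state transitions can be checked mechanically. The sketch below is illustrative: the (segment, index) state encoding is our own assumption, and cross-segment transitions are simply allowed here, although in the proposed method they are defined by the designer.

```python
def transition_allowed(src, dst):
    """Check a transition between states.

    States are (segment, index) pairs; index 0 is the long state.
    From a short state, only a forward move (higher index) or a
    return to the long state is allowed; backward moves between
    short states are forbidden to prevent cycle deadlocks.
    """
    seg_a, i = src
    seg_b, j = dst
    if seg_a != seg_b:          # cross-segment: left to the designer (allowed here)
        return True
    if i == 0 or j == 0:        # to or from the long state is always allowed
        return True
    return j > i                # short -> short only in the forward direction

print(transition_allowed(("A", 1), ("A", 2)))  # -> True
print(transition_allowed(("A", 3), ("A", 1)))  # -> False
```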

3.3 State Chart and its Description

In this part, an example of a state chart will be presented, along with a description of some of its important characteristics. The example is in Figure 11. This is a graph representation of the agent's behavior, where nodes are states and edges indicate transitions between them; it is an oriented graph. The example shown in Figure 11 can be represented in the form of a transition matrix. In this matrix, the number one indicates an existing transition between states, while zero means no transition exists. Since the graph is oriented, the matrix is not symmetric. The last column on the right in Table 1 is the sum of the outgoing edges, i.e. the outgoing degree of a graph node. When the decision procedure takes place, complete decision making is done only in the nodes marked with a star (*) symbol; this is where more than one option is available to the agent.

Figure 11: Example of state chart.

Table 1: Representation of state transitions in matrix form (rows: direction from; columns: direction to; the rightmost column contains each state's outgoing degree; states marked with * are those where more than one option is available).
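The out-degree computation behind Table 1, and the marking of decision points (the starred states), can be sketched as follows (the example matrix is invented, not the one from Figure 11):

```python
def decision_points(matrix, states):
    """Return states whose row sum (outgoing degree) is greater than 1;
    these are the nodes where complete decision making takes place."""
    return [s for s, row in zip(states, matrix) if sum(row) > 1]

states = ["S0", "S1", "S2", "S3"]
# matrix[i][j] == 1 means a transition exists from states[i] to states[j]
matrix = [
    [0, 1, 1, 0],  # S0 -> S1 or S2 (out-degree 2: decision point)
    [0, 0, 1, 0],  # S1 -> S2
    [1, 0, 0, 1],  # S2 -> S0 or S3 (decision point)
    [1, 0, 0, 0],  # S3 -> S0
]
print(decision_points(matrix, states))  # -> ['S0', 'S2']
```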

The number of variants taken into the decision-making process is given by the sum in each row of the transition matrix (Table 1). If the sum in a row is equal to 1, the agent only decides whether to stay in the current state or continue to the next one. If the goal of such a state is reached, the agent moves on to the next state and adopts another goal to reach. Since the agents in the AmIE represent appliances and equipment (the indoor case of ambient intelligence), it is usually good practice to create one more segment for maintenance purposes, used in case of any malfunction. This allows the agent to act correctly when a malfunction occurs: a log entry about the event is created and, if necessary, the user's assistance is requested. As there are many types of possible failures, this matter will not be discussed in detail, but the maintenance segment can often be shared among many agents, as the procedure is the same (or very similar) for all of them. The presented example is general (due to space limitations), but a practical modeling example may be found in (Mls & Tucnik, 2010).

3.4 User-Oriented Approach

One of the most desired features of ambient systems is to be helpful towards users and their needs. This should be realized discreetly, with the system requiring only minimal interaction with the user. Ambient elements in homes or offices often serve other functions as well: maintaining optimal energy consumption, temperature, lighting, security, etc. Some of these features may not be so obvious. Nonetheless, the user-oriented features are an important part of any such system, and they are expected to be incorporated. The proposed system allows adaptation to user needs through the modularity of actions. Individual pieces of behavior can be arranged in many ways, fulfilling an individual user's needs. For every user, a behavioral pattern can be defined, similar to the one in the example schematic in Figure 11.
This could be described in more detail and more formally, but that would be beyond the intended scope of this text. When designing the system, the difficult part is to create the attributes and implement the atomic actions of the agents; more complex behavior is created by merging these simple parts together. The system is expected to act according to the user's preferences. This requires a testing period, during which the system consults the user and makes the required adjustments. After the testing period, the system acts independently of the user, but still allows modifications at any time. The proposed control method is capable of manifesting some form of creativity in its activities. Since the agents are trying to find the most appropriate reaction to each given situation, it is possible to perceive changes in the system's response to the same signal over time. Of course, this requires some form of machine learning to be implemented in the system; see (Dembczyński et al., 2010) for machine learning issues connected with multiple attribute ranking problems. Possibly the most appropriate form of machine learning for this type of control is reinforcement learning, see (Russell & Norvig, 2009; Pfeifer & Scheier, 2001). In general, supervised learning methods may be used during the testing period and unsupervised methods afterwards. The use of supervised learning during normal operation would require the cooperation of the user and could be perceived as a malfunction or ineffectiveness of the system, since it would interfere with every-day use. It is to be expected that the effort to find the optimal solution would always require the exploration of several suboptimal routes. In any case, this is to be a part of future work. As a relevant source of information on MCDM creativity, see (Skulimowski, 2011). Setting the system to act according to a single user's preferences is not complicated. It is also nothing new in the area of ambient intelligence; these features are widely available in commercial products of

today, see (Cook & Das, 2004). Conflicts between the goals of multiple users (or of users with different access rights to the system's features, e.g. adults and children), which frequently occur when they live or work together, create problems. It is part of future work to focus on the autonomous resolution of conflicts between multiple users' goals or preferences.

4 Conclusion and Future Work

The purpose of this text was to present ideas on the use of multi-criteria decision-making for the control of an AmI environment consisting of cooperating agents. The main portion of the work lies in designing such a system properly, before it is put into practice. When this is done, the control mechanism is able to act quickly and follow appropriate behavioral patterns, performing the desired activities. The proposed system allows user-defined patterns of behavior to be implemented easily, once the background description of the environment is complete. In practice, this is often the most required capability of such a system: to follow behavioral routines according to the given situation. Multi-criteria decision-making allows the system to recognize such situations correctly and to apply a convenient action without the user's intervention. This increases the comfort of the system's users and creates user-friendly (possibly non-disturbing) interaction. There are open problems to be solved in the future: collisions between users with different access rights, cooperation models/architectures of agents, recognition of the user's activity (which is necessary in order to assist in his/her efforts), and others. One of the undeniable positive qualities of the MCDM control method is the simple incorporation of new information (attributes), agents or other features into the system. The system is able to act quickly and, when designed properly, to fulfill the important expectations users have of ambient technologies when creating smart homes or offices.
Acknowledgements

This work was supported by the Czech Science Foundation project SMEW (Smart Environments at Workplaces, no. P403/10/1310) and by the Specific Research Project AmIRRA 2 (Ambient Intelligence and Related Research Activities).

References

Augusto, J. C. & McCullagh, P. (2007). Ambient Intelligence: Concepts and Applications. ComSIS, Vol. 4, No. 1.

Awad, E. M. & Ghaziri, H. M. (2004). Knowledge Management. New Jersey: Pearson Education.

Bottaro, A., Bourcier, J., Escoffier, C. & Lalanda, P. (2007). Context-Aware Service Composition in a Home Control Gateway. In IEEE International Conference on Pervasive Services.

Cook, D. J. & Das, S. (2004). Smart Environments: Technologies, Protocols and Applications. New Jersey: John Wiley and Sons.


Intelligent Agents. Soleymani. Artificial Intelligence: A Modern Approach, Chapter 2 Intelligent Agents CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2016 Soleymani Artificial Intelligence: A Modern Approach, Chapter 2 Outline Agents and environments

More information

Perceptual Anchoring with Indefinite Descriptions

Perceptual Anchoring with Indefinite Descriptions Perceptual Anchoring with Indefinite Descriptions Silvia Coradeschi and Alessandro Saffiotti Center for Applied Autonomous Sensor Systems Örebro University, S-70182 Örebro, Sweden silvia.coradeschi, alessandro.saffiotti

More information

From where does the content of a certain geo-communication come? semiotics in web-based geo-communication Brodersen, Lars

From where does the content of a certain geo-communication come? semiotics in web-based geo-communication Brodersen, Lars Downloaded from vbn.aau.dk on: april 02, 2019 Aalborg Universitet From where does the content of a certain geo-communication come? semiotics in web-based geo-communication Brodersen, Lars Published in:

More information

Intelligent Agents. Philipp Koehn. 16 February 2017

Intelligent Agents. Philipp Koehn. 16 February 2017 Intelligent Agents Philipp Koehn 16 February 2017 Agents and Environments 1 Agents include humans, robots, softbots, thermostats, etc. The agent function maps from percept histories to actions: f : P A

More information

ICS 606. Intelligent Autonomous Agents 1. Intelligent Autonomous Agents ICS 606 / EE 606 Fall Reactive Architectures

ICS 606. Intelligent Autonomous Agents 1. Intelligent Autonomous Agents ICS 606 / EE 606 Fall Reactive Architectures Intelligent Autonomous Agents ICS 606 / EE 606 Fall 2011 Nancy E. Reed nreed@hawaii.edu 1 Lecture #5 Reactive and Hybrid Agents Reactive Architectures Brooks and behaviors The subsumption architecture

More information

Dr. Mustafa Jarrar. Chapter 2 Intelligent Agents. Sina Institute, University of Birzeit

Dr. Mustafa Jarrar. Chapter 2 Intelligent Agents. Sina Institute, University of Birzeit Lecture Notes, Advanced Artificial Intelligence (SCOM7341) Sina Institute, University of Birzeit 2 nd Semester, 2012 Advanced Artificial Intelligence (SCOM7341) Chapter 2 Intelligent Agents Dr. Mustafa

More information

Intelligent Agents. Instructor: Tsung-Che Chiang

Intelligent Agents. Instructor: Tsung-Che Chiang Intelligent Agents Instructor: Tsung-Che Chiang tcchiang@ieee.org Department of Computer Science and Information Engineering National Taiwan Normal University Artificial Intelligence, Spring, 2010 Outline

More information

Incorporation of Imaging-Based Functional Assessment Procedures into the DICOM Standard Draft version 0.1 7/27/2011

Incorporation of Imaging-Based Functional Assessment Procedures into the DICOM Standard Draft version 0.1 7/27/2011 Incorporation of Imaging-Based Functional Assessment Procedures into the DICOM Standard Draft version 0.1 7/27/2011 I. Purpose Drawing from the profile development of the QIBA-fMRI Technical Committee,

More information

Choose an approach for your research problem

Choose an approach for your research problem Choose an approach for your research problem This course is about doing empirical research with experiments, so your general approach to research has already been chosen by your professor. It s important

More information

Exploration and Exploitation in Reinforcement Learning

Exploration and Exploitation in Reinforcement Learning Exploration and Exploitation in Reinforcement Learning Melanie Coggan Research supervised by Prof. Doina Precup CRA-W DMP Project at McGill University (2004) 1/18 Introduction A common problem in reinforcement

More information

HearIntelligence by HANSATON. Intelligent hearing means natural hearing.

HearIntelligence by HANSATON. Intelligent hearing means natural hearing. HearIntelligence by HANSATON. HearIntelligence by HANSATON. Intelligent hearing means natural hearing. Acoustic environments are complex. We are surrounded by a variety of different acoustic signals, speech

More information

Agent-Based Systems. Agent-Based Systems. Michael Rovatsos. Lecture 5 Reactive and Hybrid Agent Architectures 1 / 19

Agent-Based Systems. Agent-Based Systems. Michael Rovatsos. Lecture 5 Reactive and Hybrid Agent Architectures 1 / 19 Agent-Based Systems Michael Rovatsos mrovatso@inf.ed.ac.uk Lecture 5 Reactive and Hybrid Agent Architectures 1 / 19 Where are we? Last time... Practical reasoning agents The BDI architecture Intentions

More information

Grounding Ontologies in the External World

Grounding Ontologies in the External World Grounding Ontologies in the External World Antonio CHELLA University of Palermo and ICAR-CNR, Palermo antonio.chella@unipa.it Abstract. The paper discusses a case study of grounding an ontology in the

More information

Vorlesung Grundlagen der Künstlichen Intelligenz

Vorlesung Grundlagen der Künstlichen Intelligenz Vorlesung Grundlagen der Künstlichen Intelligenz Reinhard Lafrenz / Prof. A. Knoll Robotics and Embedded Systems Department of Informatics I6 Technische Universität München www6.in.tum.de lafrenz@in.tum.de

More information

22c:145 Artificial Intelligence

22c:145 Artificial Intelligence 22c:145 Artificial Intelligence Fall 2005 Intelligent Agents Cesare Tinelli The University of Iowa Copyright 2001-05 Cesare Tinelli and Hantao Zhang. a a These notes are copyrighted material and may not

More information

CONSTELLATIONS AND LVT

CONSTELLATIONS AND LVT CONSTELLATIONS AND LVT Constellations appears in our comparative grid as innovation in natural systems, associated with emergence and self-organisation. It is based on ideas of natural harmony and relates

More information

Intelligent Agents. Chapter 2 ICS 171, Fall 2009

Intelligent Agents. Chapter 2 ICS 171, Fall 2009 Intelligent Agents Chapter 2 ICS 171, Fall 2009 Discussion \\Why is the Chinese room argument impractical and how would we have to change the Turing test so that it is not subject to this criticism? Godel

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - - Electrophysiological Measurements Psychophysical Measurements Three Approaches to Researching Audition physiology

More information

Annotation and Retrieval System Using Confabulation Model for ImageCLEF2011 Photo Annotation

Annotation and Retrieval System Using Confabulation Model for ImageCLEF2011 Photo Annotation Annotation and Retrieval System Using Confabulation Model for ImageCLEF2011 Photo Annotation Ryo Izawa, Naoki Motohashi, and Tomohiro Takagi Department of Computer Science Meiji University 1-1-1 Higashimita,

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - Electrophysiological Measurements - Psychophysical Measurements 1 Three Approaches to Researching Audition physiology

More information

Intelligent Agents. Chapter 2

Intelligent Agents. Chapter 2 Intelligent Agents Chapter 2 Outline Agents and environments Rationality Task environment: PEAS: Performance measure Environment Actuators Sensors Environment types Agent types Agents and Environments

More information

To conclude, a theory of error must be a theory of the interaction between human performance variability and the situational constraints.

To conclude, a theory of error must be a theory of the interaction between human performance variability and the situational constraints. The organisers have provided us with a both stimulating and irritating list of questions relating to the topic of the conference: Human Error. My first intention was to try to answer the questions one

More information

Sparse Coding in Sparse Winner Networks

Sparse Coding in Sparse Winner Networks Sparse Coding in Sparse Winner Networks Janusz A. Starzyk 1, Yinyin Liu 1, David Vogel 2 1 School of Electrical Engineering & Computer Science Ohio University, Athens, OH 45701 {starzyk, yliu}@bobcat.ent.ohiou.edu

More information

Visual Processing (contd.) Pattern recognition. Proximity the tendency to group pieces that are close together into one object.

Visual Processing (contd.) Pattern recognition. Proximity the tendency to group pieces that are close together into one object. Objectives of today s lecture From your prior reading and the lecture, be able to: explain the gestalt laws of perceptual organization list the visual variables and explain how they relate to perceptual

More information

Convergence Principles: Information in the Answer

Convergence Principles: Information in the Answer Convergence Principles: Information in the Answer Sets of Some Multiple-Choice Intelligence Tests A. P. White and J. E. Zammarelli University of Durham It is hypothesized that some common multiplechoice

More information

Citation for published version (APA): Geus, A. F. D., & Rotterdam, E. P. (1992). Decision support in aneastehesia s.n.

Citation for published version (APA): Geus, A. F. D., & Rotterdam, E. P. (1992). Decision support in aneastehesia s.n. University of Groningen Decision support in aneastehesia Geus, Arian Fred de; Rotterdam, Ernest Peter IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to

More information

Reactivity and Deliberation in Decision-Making Systems

Reactivity and Deliberation in Decision-Making Systems 11 Reactivity and Deliberation in Decision-Making Systems Carle Côté 11.1 Introduction 11.2 Let s Begin at the Beginning 11.3 Common Pitfall #1 : One Decision Model to Rule Them All! 11.4 Common Pitfall

More information

Errol Davis Director of Research and Development Sound Linked Data Inc. Erik Arisholm Lead Engineer Sound Linked Data Inc.

Errol Davis Director of Research and Development Sound Linked Data Inc. Erik Arisholm Lead Engineer Sound Linked Data Inc. An Advanced Pseudo-Random Data Generator that improves data representations and reduces errors in pattern recognition in a Numeric Knowledge Modeling System Errol Davis Director of Research and Development

More information

Perception Lie Paradox: Mathematically Proved Uncertainty about Humans Perception Similarity

Perception Lie Paradox: Mathematically Proved Uncertainty about Humans Perception Similarity Perception Lie Paradox: Mathematically Proved Uncertainty about Humans Perception Similarity Ahmed M. Mahran Computer and Systems Engineering Department, Faculty of Engineering, Alexandria University,

More information

Evolutionary Programming

Evolutionary Programming Evolutionary Programming Searching Problem Spaces William Power April 24, 2016 1 Evolutionary Programming Can we solve problems by mi:micing the evolutionary process? Evolutionary programming is a methodology

More information

Local Image Structures and Optic Flow Estimation

Local Image Structures and Optic Flow Estimation Local Image Structures and Optic Flow Estimation Sinan KALKAN 1, Dirk Calow 2, Florentin Wörgötter 1, Markus Lappe 2 and Norbert Krüger 3 1 Computational Neuroscience, Uni. of Stirling, Scotland; {sinan,worgott}@cn.stir.ac.uk

More information

BlueBayCT - Warfarin User Guide

BlueBayCT - Warfarin User Guide BlueBayCT - Warfarin User Guide December 2012 Help Desk 0845 5211241 Contents Getting Started... 1 Before you start... 1 About this guide... 1 Conventions... 1 Notes... 1 Warfarin Management... 2 New INR/Warfarin

More information

Learning to Use Episodic Memory

Learning to Use Episodic Memory Learning to Use Episodic Memory Nicholas A. Gorski (ngorski@umich.edu) John E. Laird (laird@umich.edu) Computer Science & Engineering, University of Michigan 2260 Hayward St., Ann Arbor, MI 48109 USA Abstract

More information

Reliability, validity, and all that jazz

Reliability, validity, and all that jazz Reliability, validity, and all that jazz Dylan Wiliam King s College London Introduction No measuring instrument is perfect. The most obvious problems relate to reliability. If we use a thermometer to

More information

AGENT-BASED SYSTEMS. What is an agent? ROBOTICS AND AUTONOMOUS SYSTEMS. Today. that environment in order to meet its delegated objectives.

AGENT-BASED SYSTEMS. What is an agent? ROBOTICS AND AUTONOMOUS SYSTEMS. Today. that environment in order to meet its delegated objectives. ROBOTICS AND AUTONOMOUS SYSTEMS Simon Parsons Department of Computer Science University of Liverpool LECTURE 16 comp329-2013-parsons-lect16 2/44 Today We will start on the second part of the course Autonomous

More information

Assessing Modes of Interaction

Assessing Modes of Interaction Project 2 Assessing Modes of Interaction Analysis of exercise equipment Overview For this assignment, we conducted a preliminary analysis of two similar types of elliptical trainers. We are able to find

More information

PART - A 1. Define Artificial Intelligence formulated by Haugeland. The exciting new effort to make computers think machines with minds in the full and literal sense. 2. Define Artificial Intelligence

More information

What is AI? The science of making machines that:

What is AI? The science of making machines that: What is AI? The science of making machines that: Think like humans Think rationally Act like humans Act rationally Thinking Like Humans? The cognitive science approach: 1960s ``cognitive revolution'':

More information

ERA: Architectures for Inference

ERA: Architectures for Inference ERA: Architectures for Inference Dan Hammerstrom Electrical And Computer Engineering 7/28/09 1 Intelligent Computing In spite of the transistor bounty of Moore s law, there is a large class of problems

More information

Data mining for Obstructive Sleep Apnea Detection. 18 October 2017 Konstantinos Nikolaidis

Data mining for Obstructive Sleep Apnea Detection. 18 October 2017 Konstantinos Nikolaidis Data mining for Obstructive Sleep Apnea Detection 18 October 2017 Konstantinos Nikolaidis Introduction: What is Obstructive Sleep Apnea? Obstructive Sleep Apnea (OSA) is a relatively common sleep disorder

More information

Robot Learning Letter of Intent

Robot Learning Letter of Intent Research Proposal: Robot Learning Letter of Intent BY ERIK BILLING billing@cs.umu.se 2006-04-11 SUMMARY The proposed project s aim is to further develop the learning aspects in Behavior Based Control (BBC)

More information

Technical Specifications

Technical Specifications Technical Specifications In order to provide summary information across a set of exercises, all tests must employ some form of scoring models. The most familiar of these scoring models is the one typically

More information

Intelligent Agents. BBM 405 Fundamentals of Artificial Intelligence Pinar Duygulu Hacettepe University. Slides are mostly adapted from AIMA

Intelligent Agents. BBM 405 Fundamentals of Artificial Intelligence Pinar Duygulu Hacettepe University. Slides are mostly adapted from AIMA 1 Intelligent Agents BBM 405 Fundamentals of Artificial Intelligence Pinar Duygulu Hacettepe University Slides are mostly adapted from AIMA Outline 2 Agents and environments Rationality PEAS (Performance

More information

You can use this app to build a causal Bayesian network and experiment with inferences. We hope you ll find it interesting and helpful.

You can use this app to build a causal Bayesian network and experiment with inferences. We hope you ll find it interesting and helpful. icausalbayes USER MANUAL INTRODUCTION You can use this app to build a causal Bayesian network and experiment with inferences. We hope you ll find it interesting and helpful. We expect most of our users

More information

1 The conceptual underpinnings of statistical power

1 The conceptual underpinnings of statistical power 1 The conceptual underpinnings of statistical power The importance of statistical power As currently practiced in the social and health sciences, inferential statistics rest solidly upon two pillars: statistical

More information

Intelligent Machines That Act Rationally. Hang Li Toutiao AI Lab

Intelligent Machines That Act Rationally. Hang Li Toutiao AI Lab Intelligent Machines That Act Rationally Hang Li Toutiao AI Lab Four Definitions of Artificial Intelligence Building intelligent machines (i.e., intelligent computers) Thinking humanly Acting humanly Thinking

More information

Analogical Inference

Analogical Inference Analogical Inference An Investigation of the Functioning of the Hippocampus in Relational Learning Using fmri William Gross Anthony Greene Today I am going to talk to you about a new task we designed to

More information

TWO HANDED SIGN LANGUAGE RECOGNITION SYSTEM USING IMAGE PROCESSING

TWO HANDED SIGN LANGUAGE RECOGNITION SYSTEM USING IMAGE PROCESSING 134 TWO HANDED SIGN LANGUAGE RECOGNITION SYSTEM USING IMAGE PROCESSING H.F.S.M.Fonseka 1, J.T.Jonathan 2, P.Sabeshan 3 and M.B.Dissanayaka 4 1 Department of Electrical And Electronic Engineering, Faculty

More information

Bayesian Belief Network Based Fault Diagnosis in Automotive Electronic Systems

Bayesian Belief Network Based Fault Diagnosis in Automotive Electronic Systems Bayesian Belief Network Based Fault Diagnosis in Automotive Electronic Systems Yingping Huang *, David Antory, R. Peter Jones, Craig Groom, Ross McMurran, Peter Earp and Francis Mckinney International

More information

Recognizing Scenes by Simulating Implied Social Interaction Networks

Recognizing Scenes by Simulating Implied Social Interaction Networks Recognizing Scenes by Simulating Implied Social Interaction Networks MaryAnne Fields and Craig Lennon Army Research Laboratory, Aberdeen, MD, USA Christian Lebiere and Michael Martin Carnegie Mellon University,

More information

EEL-5840 Elements of {Artificial} Machine Intelligence

EEL-5840 Elements of {Artificial} Machine Intelligence Menu Introduction Syllabus Grading: Last 2 Yrs Class Average 3.55; {3.7 Fall 2012 w/24 students & 3.45 Fall 2013} General Comments Copyright Dr. A. Antonio Arroyo Page 2 vs. Artificial Intelligence? DEF:

More information

The Power of Feedback

The Power of Feedback The Power of Feedback 35 Principles for Turning Feedback from Others into Personal and Professional Change By Joseph R. Folkman The Big Idea The process of review and feedback is common in most organizations.

More information

Silvia Rossi. Agent as Intentional Systems. Lezione n. Corso di Laurea: Informatica. Insegnamento: Sistemi multi-agente.

Silvia Rossi. Agent as Intentional Systems. Lezione n. Corso di Laurea: Informatica. Insegnamento: Sistemi multi-agente. Silvia Rossi Agent as Intentional Systems 2 Lezione n. Corso di Laurea: Informatica Insegnamento: Sistemi multi-agente Email: silrossi@unina.it A.A. 2014-2015 Agenti e Ambienti (RN, WS) 2 Environments

More information