CSCE 523 Artificial Intelligence Assignment 1 to 4 solutions


Assignment 1 CSCE 523 Agents and Uninformed Search

1. (10 points) What is intelligence? Several research views can be taken to
understand intelligence; for the purposes of this discussion, let us label
them the philosophical, physicalist, and psychological views. Almost every
AI algorithm we will discuss is related to one or more of these views.

Read the Stanford Encyclopedia of Philosophy articles on the Theory of Mind
topics of Qualia and Physicalism (https://plato.stanford.edu/entries/qualia/
and https://plato.stanford.edu/entries/physicalism/), and the Sweller paper
on Human Cognitive Architecture
(https://www.csuchico.edu/~nschwartz/Sweller_2008.pdf) {note: this is
the closest I have found to a good short discussion on cognitive psychology
and the modeling of it}

a. Which view do you feel is more correct? And why?
b. Given your choice, is an artificially generated intelligence possible?
Why or why not?

2. (15 points) Testing for intelligence: Read Turing’s original paper (available
on-line at: https://www.abelard.org/turpap/turpap.htm). In the paper, he
discusses several potential objections to his proposed enterprise and his
test for intelligence. Also, refer to the discussion on the Chinese Room
Argument found in the text, slides and at:
https://plato.stanford.edu/entries/chinese-room/ or video:
https://www.open.edu/openlearn/history-the-arts/culture/philosophy/60-second-adventures-thought?track=04fb8b569.

a. Which objections still carry some weight? Are his refutations valid?
b. Can you think of new objections arising from developments since
he wrote the paper?
c. Do you think that there is a better test that could be proposed?

3. (5 points) Intelligence and computational limits: There are well-known
classes of problems that are intractably difficult for computers and other
classes that are provably undecidable by any computer. Does this mean
that strong (human-level) AI is impossible? Why or why not?

4. (10 points) Consider a modified version of the vacuum-cleaner world
(depicted in Figures 2.2 and 2.3 and specified on pages 35-36), in which
the geography of the environment – its extent, boundaries, dirt locations,
and obstacles – is unknown; the agent can go Up and Down as well as Left
and Right.

a. Can a simple reflex agent be perfectly rational for this
environment? Explain.

b. Can you design an environment in which your randomized agent
will perform very poorly? If not, explain; if yes, provide an example.
c. Can a reflex agent with state outperform a simple reflex agent?
Why?

5. (20 points) For the following tree, show the lists of open and visited nodes
for each cycle of the listed search algorithms. Expand the nodes in a right
to left ordering. The start node is S and the goal node is X. The numbers
next to the edges indicate the associated cost. Note: follow the format from
class.
a. Breadth-first search
b. Depth-first search
c. Uniform cost search
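As a reminder of the bookkeeping these traces ask for, here is a minimal sketch of uniform cost search that prints the expanded node and the visited list each cycle. The graph, node names, and edge costs below are illustrative stand-ins, not the tree from the assignment.

```java
import java.util.*;

public class UCS {
    // adjacency: node -> (neighbor -> edge cost)
    static Map<String, Map<String, Integer>> graph = new HashMap<>();

    static void edge(String a, String b, int cost) {
        graph.computeIfAbsent(a, k -> new HashMap<>()).put(b, cost);
    }

    // Returns the cheapest path cost from start to goal, or -1 if unreachable.
    static int search(String start, String goal) {
        // open list ordered by path cost g; visited holds expanded nodes
        PriorityQueue<Object[]> open =
            new PriorityQueue<>(Comparator.comparingInt((Object[] e) -> (Integer) e[1]));
        open.add(new Object[]{start, 0});
        Set<String> visited = new HashSet<>();
        while (!open.isEmpty()) {
            Object[] cur = open.poll();
            String node = (String) cur[0];
            int g = (Integer) cur[1];
            if (visited.contains(node)) continue;   // skip stale queue entries
            if (node.equals(goal)) return g;        // goal test on expansion, not generation
            visited.add(node);
            System.out.println("expand " + node + " g=" + g + " visited=" + visited);
            for (Map.Entry<String, Integer> n :
                     graph.getOrDefault(node, Map.of()).entrySet())
                if (!visited.contains(n.getKey()))
                    open.add(new Object[]{n.getKey(), g + n.getValue()});
        }
        return -1;
    }

    public static void main(String[] args) {
        // illustrative graph: two routes from S to X, both of cost 6
        edge("S", "A", 2); edge("S", "B", 5);
        edge("A", "X", 4); edge("B", "X", 1);
        System.out.println("cost = " + search("S", "X"));
    }
}
```

Note that the goal test happens when a node is removed from the open list; testing at generation time can return a suboptimal path under uniform cost search.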

6. (40 points) Ah, Christmas is over. Now, that horrible mall traffic should let
up… what is this, a traffic jam?
Seems that everyone is stuck and you can’t get through. You must direct
the trapped shoppers out of your way and maneuver your trusty vehicle
through the maze to get back home in one piece.

For this problem, implement a move-optimal solver for the Rush Hour
puzzle. Rush Hour is a sliding block puzzle containing 4 trucks, 11 cars, and
your vehicle placed on a 6×6 grid, with impenetrable walls surrounding its
perimeter except for a one-cell exit edge. The trucks occupy an area of one
by three cells, and the cars occupy an area of one by two adjacent grid
cells.

All vehicles can only move forwards or backwards, never overlapping.
The goal is to move the vehicles one at a time so that your car may exit.
The implementation should read a set of puzzles from a text file (format
shown in the example below), where A1 to K1 are cars, O1 to R1 are trucks,
and X0 is your vehicle. The exit is always at the third position down on the
right, and each empty grid location is (..). The file begins with the number
of puzzles found in the file.

Expected turn-in is the 1) source code, 2) compilation and execution
instructions, and 3) a short paper describing your solution, results, and any
search enhancements. A set of 5 problems is provided for testing your
search algorithm during development (simple.txt), and 5 that must be
tested against in the report (hard.txt). Be sure to include your timing and
best path results for each of the problems your search can solve. Your
implementation must be your work only.

As for what enhancements to try, take a look at the publications of Andreas
Junghanns on Sokoban
(https://www.cs.ualberta.ca/~games/Sokoban/papers.html).
To get you started, I have provided Java and MATLAB code that will read
the files and handle the state updating (both are in the class directory). For
the Java, just implement the provided search interface in each of your own
search(es).

For MATLAB, fill in the search function. A note on implementation
language: you may find that MATLAB is quicker to write than Java;
however, you will take a significant performance hit. Using a BFS, the
runtime comparison between the implementations on the simple.txt
problems is:
Problem   Java     MATLAB
1         0.003s   0.03s
2         0.08s    0.13s
3         0.50s    4.30s
4         0.44s    755.89s (13 min)
5         0.92s    (7 hr 36 min)
1
O1..Q1Q1Q1B1
O1....C1..B1
O1X0X0C1.... <= Exit is here; this comment and the arrow
......P1....    do not appear in the file.
......P1....
......P1....
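To make the file format concrete, here is a minimal sketch of parsing one board into a vehicle map. The class and method names are my own, not the provided course code; only the two-character cell encoding comes from the handout.

```java
import java.util.*;

public class RushHourBoard {
    // Maps each vehicle label (e.g. "X0", "O1") to the list of {row, col}
    // cells it occupies; ".." cells are empty and skipped.
    static Map<String, List<int[]>> parse(String[] rows) {
        Map<String, List<int[]>> vehicles = new LinkedHashMap<>();
        for (int r = 0; r < 6; r++)
            for (int c = 0; c < 6; c++) {
                String cell = rows[r].substring(2 * c, 2 * c + 2);
                if (!cell.equals(".."))
                    vehicles.computeIfAbsent(cell, k -> new ArrayList<>())
                            .add(new int[]{r, c});
            }
        return vehicles;
    }

    public static void main(String[] args) {
        // the example board from the handout
        String[] rows = {
            "O1..Q1Q1Q1B1", "O1....C1..B1", "O1X0X0C1....",
            "......P1....", "......P1....", "......P1...." };
        Map<String, List<int[]>> v = parse(rows);
        // trucks occupy 3 cells, cars 2; orientation follows from whether
        // a vehicle's cells share a row or a column
        System.out.println(v.get("X0").size() + " cells for X0");
    }
}
```

From such a map, move generation is just sliding a vehicle one cell along its orientation when the cell ahead or behind is empty, which gives the successor states a BFS (or better) search needs.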

Assignment 2 CSCE 523 Search and Game Tree Search

1. (30 points) For the following maze, show the lists of open and visited nodes
(with their associated costs) for each cycle of the listed search algorithms.

The start cell is S and the goal cell is G. The agent can move north, south,
east, and west. The agent expends 1 point moving south or west, and 2
points moving north or east.

a. Decide on a heuristic estimator function and write the function out.
b. Decide on a method to break ties (label all cells with letters and break
alphabetically, first come first served, etc.) and write that out.
Then perform the following searches on the space (follow the format from
class):

c. Beam search with a beam size of 2
d. IDA* search
[Maze figure: a grid with columns labeled A–G and rows 1–5; S marks the
start cell and G the goal cell.]
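For part (a), one reasonable admissible heuristic is a direction-weighted Manhattan distance: since a south or west step costs 1 and a north or east step costs 2, each step still needed toward the goal is charged at exactly the rate that direction costs. The coordinate convention below (x grows east, y grows north) is illustrative, not mandated by the handout.

```java
public class MazeHeuristic {
    // h(cell) = weighted Manhattan distance to the goal (gx, gy)
    static int h(int x, int y, int gx, int gy) {
        int horiz = (gx > x) ? 2 * (gx - x)   // must still travel east at cost 2
                             : (x - gx);      // must still travel west at cost 1
        int vert  = (gy > y) ? 2 * (gy - y)   // must still travel north at cost 2
                             : (y - gy);      // must still travel south at cost 1
        return horiz + vert;
    }

    public static void main(String[] args) {
        // goal 3 cells east and 2 cells north: 2*3 + 2*2 = 10
        System.out.println(h(0, 0, 3, 2));
    }
}
```

Any path must contain at least the net number of moves in each required direction, and each such move costs at least the weight used, so this never overestimates and is usable for both beam search and IDA*.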

2. (20 points) For the following Light-Up puzzle, show the sequence of variable
assignments during backtracking with forward checking; examine cells in
alphabetical order. Show assignments by writing out the forward checking
table (14 columns: step number, a value, b value, …, l value,
backtrack {list the constraint violation that causes the backtrack}).
[Light-Up figure: a 4×4 grid whose open cells are labeled a through l; the
black cells carry the number clues 1, 2, and 0.]

3. (50 points) Below are the rules for the game Lines of Action. For this part
of the assignment you are to write a board evaluator and a correct alpha-beta minimax search for the game.

I have provided Java files to maintain
the board state, to test the legality of a move, and to generate a list of
possible moves as a Vector for one or all of the pieces in the class directory.

You are responsible for completing LOABoard.heuristicEvaluation() and
writing a MinimaxAlphaBetaSearch class. Do not feel constrained by my
code; if you feel that you need additional elements or different functionality,
feel free to change it, but be sure to document the changes.

Notes:
In the LOABoard class, set the BOARD_SIZE to be 4 or 5 for easier
debugging (mnkGame, i.e. TicTacToe is also included).
In the LOACustomPanel class, set the SELF_PLAY to true if you want to play
against yourself to get a feel for the rules.

Lines of Action Rules:
1. The black pieces are placed in two rows along the top and bottom of
the board, while the white pieces are placed in two files at the left and
right side of the board (Figure 1).
2. The players alternately move, starting with Black.

3. A player to move must move one of its pieces. A move takes place in a
straight line (up, down, left, right, and all four diagonals), exactly as
many squares as there are pieces of either color anywhere along the
line of movement (These are the Lines of Action).
4. A player may jump over its own pieces.

5. A player may not jump over the opponent’s pieces, but can capture them
by landing on them.

6. To win a player must move all their pieces on the board into one
connected unit. The first player to do so is the winner. The connections
within the group may be either orthogonal or diagonal. For example, in
Figure 2 Black has won because the black pieces form one connected
unit.
Figure 1: Starting board.

7. If one player’s pieces are reduced by captures to a single piece, the
game is a win for that player (a single piece is trivially connected).
8. If a move simultaneously creates a single connected unit for both
players, the player that moved wins.
Figure 2: Black wins.

Additional Resources:
There are several articles on Lines of Action in the course directory under
the subdirectory Articles. Some other links on the game:
Lines of Action home page
https://boardspace.net/loa/
https://www.chessprogramming.org/Lines_of_Action
U of A GAMES Group Home Page (YL and MONA)
https://webdocs.cs.ualberta.ca/~darse/LOA/
Mark Winands (MIA)
https://dke.maastrichtuniversity.nl/m.winands/loa/

Turn-Ins:
Turn in should include your 1) source code, 2) instructions for
compilation, and 3) a write-up describing 3.a) your implementation, 3.b)
heuristic details, and 3.c) experiences.

Stipulations:
You are responsible for having your search halt at the set threshold depth,
pruning the correct amount of space, and returning correct end game
actions. For the threshold depth, note that one step is one player’s move,
whether it is your move or your opponent’s – not your program’s move plus
your opponent’s countermove.
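A minimal alpha-beta sketch, run here over a small hand-built tree so the pruning and the one-ply-per-move depth convention are visible. The Node type is an illustrative stand-in; in the assignment, the LOABoard state, move generator, and heuristicEvaluation() take its place.

```java
public class AlphaBeta {
    static class Node {
        int value;                        // heuristic value, used at leaves
        Node[] children = new Node[0];
        Node(int v) { value = v; }
        Node(Node... kids) { children = kids; }
    }

    static int search(Node n, int depth, int alpha, int beta, boolean maxing) {
        if (depth == 0 || n.children.length == 0)
            return n.value;               // threshold depth or terminal: evaluate
        if (maxing) {
            int best = Integer.MIN_VALUE;
            for (Node c : n.children) {
                best = Math.max(best, search(c, depth - 1, alpha, beta, false));
                alpha = Math.max(alpha, best);
                if (alpha >= beta) break; // beta cutoff: prune remaining moves
            }
            return best;
        } else {
            int best = Integer.MAX_VALUE;
            for (Node c : n.children) {
                best = Math.min(best, search(c, depth - 1, alpha, beta, true));
                beta = Math.min(beta, best);
                if (alpha >= beta) break; // alpha cutoff
            }
            return best;
        }
    }

    public static void main(String[] args) {
        // classic textbook tree whose minimax value is 3
        Node root = new Node(
            new Node(new Node(3), new Node(12), new Node(8)),
            new Node(new Node(2), new Node(4), new Node(6)),
            new Node(new Node(14), new Node(5), new Node(2)));
        System.out.println(search(root, 2, Integer.MIN_VALUE, Integer.MAX_VALUE, true));
    }
}
```

Each recursive call decrements depth by exactly one, matching the stipulation that one step is one player's move, not a move-countermove pair.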

Assignment 3 CSCE 523 Knowledge Representation and Planning

1. (5 points) What logic rule did the Cheshire Cat use in this argument, and
is it sound?
“To begin with,” said the Cat, “a dog’s not mad. You grant that?”
“I suppose so,” said Alice.
“Well, then,” the Cat went on, “you see a dog growls when it’s angry, and
wags its tail when it’s pleased. Now I growl when I’m pleased, and wag my
tail when I’m angry. Therefore I’m mad.”

2. (5 points) Translate the following Lewis Carroll sentences into a
Propositional Logic Knowledge Base and derive two statements from the
Knowledge Base:
All hummingbirds are richly colored.
No large birds live on honey.
Birds that do not live on honey are dull in color.

3. (5 points) Translate the following into First Order Logic and then convert
to Conjunctive Normal Form (CNF):
According to the Pigeon: If little girls eat eggs, then they are a kind of
serpent. Alice (who is a little girl) eats eggs. Therefore, she is a kind of
serpent.

4. (10 points) Determine for the following pairs of sentences whether they can
be unified; if they can, give the most general unifier, and if not, discuss why.
a. P(x) ∧ Q(Dog, x) ∧ R(Cat, Dog)
P(Cat) ∧ Q(y, z) ∧ R(z, y)
b. Queen(Hearts) ∧ HasProblem(Hearts, y) → Solve(y, Beheading) ∧
HasHeadandBody(y)
Queen(x) ∧ HasProblem(x, Cheshire) → Solve(y, Beheading) ∧
HasHeadandBody(y)

c. ((Son(x,x) ∧ Sister(Mary,Jack)) → (Daughter(x,Mary) ∧
Brother(Jack,Mary)))
((Son(Jack,x) ∧ Sister(z,x)) → (Daughter(z,f(x)) ∧ Brother(y,z)))

5. (15 points) Use resolution with refutation to show that the following three
queries can be inferred from the given knowledge base. At each resolution
step also indicate the corresponding identifier and binding list.
KB: S1: Uncle(John, Jack)
S2: Aunt( Mary, Amy)
S3: Female(Amy)
S4: Brother(Jack, Amy)
S5: Brother(Bill, Jack)
S6: Sister(x,y) → Siblings(x,y)
S7: Brother(x,y) → Siblings(x,y)
S8: Brother(x,y) ∧ Female(y) → Sister(y,x)
S9: Siblings(x,y) ∧ Uncle(z,y) → Uncle(z,x)
S10: Siblings(x,y) → Siblings(y,x)
S11: Uncle(x,y) ∧ Aunt(z,y) → Married(x,z)
S12: Uncle(x,y) ∧ Married(z,x) → Aunt(z,y)
S13: Married(x,y) → Married(y,x)
a. Married(John, Mary)
b. Aunt(Mary, Jack)
c. ∃x (Siblings(Jack,x) ∧ Uncle(John,x))

6. (10 points) Translate the knowledge base from problem 5 into a formula list
for otter and use it to perform a proof by refutation for the queries from
problem 5, and the two below. A copy of the otter executable and
documentation can be found in the course directory. If you want to run
otter on a non-Windows computer you can access the information you will
need at https://www-unix.mcs.anl.gov/AR/otter/.

If a sentence cannot be proven, determine what needs to be added to the
knowledge base to make it provable, and whether this would invalidate the
KB. (Turn in the otter files, and a copy of the screen output for each query.)
d. Brother(Bill, Amy)
e. Uncle(John, Bill) ∧ Siblings(Bill, Jack)

7. (25 points) For the following logic problem, a) encode the problem and have
Otter solve it, and b) represent the problem as a constraint satisfaction
problem and solve it using the backtracking algorithm with forward checking.

Link, Zelda, and Ganondorf fought three different evil creatures, the
Octorock, an Iron Knuckle, and a Poe. They fought them with three different
weapons, a bow and arrows, magic, and a sword.

Determine who fought
what creature and with what weapon.
1. Ganondorf did not fight the Octorock.
2. The Iron Knuckle was not fought against with magic.
3. Zelda fought the Octorock.
4. Link has a sword and did not fight the Poe.
5. Zelda fought with the bow and arrows.
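Before committing to the CSP formulation, it can help to sanity-check the clues with a plain generate-and-test pass over all assignments; backtracking with forward checking prunes this same space instead of enumerating it. The class name and encoding below are my own illustration.

```java
public class ZeldaPuzzle {
    // hero index: 0 = Link, 1 = Zelda, 2 = Ganondorf
    static final String[] CREATURES = {"Octorock", "Iron Knuckle", "Poe"};
    static final String[] WEAPONS = {"bow", "magic", "sword"};

    // Returns {Link's creature, Zelda's, Ganondorf's, Link's weapon, Zelda's, Ganondorf's}
    static String[] solve() {
        int[][] perms = {{0,1,2},{0,2,1},{1,0,2},{1,2,0},{2,0,1},{2,1,0}};
        for (int[] c : perms)            // c[h] = creature index of hero h
            for (int[] w : perms) {      // w[h] = weapon index of hero h
                if (c[2] == 0) continue;                 // 1. Ganondorf did not fight the Octorock
                boolean magicOnKnuckle = false;          // 2. Iron Knuckle not fought with magic
                for (int h = 0; h < 3; h++)
                    if (c[h] == 1 && w[h] == 1) magicOnKnuckle = true;
                if (magicOnKnuckle) continue;
                if (c[1] != 0) continue;                 // 3. Zelda fought the Octorock
                if (w[0] != 2 || c[0] == 2) continue;    // 4. Link: sword, and not the Poe
                if (w[1] != 0) continue;                 // 5. Zelda: bow and arrows
                return new String[]{CREATURES[c[0]], CREATURES[c[1]], CREATURES[c[2]],
                                    WEAPONS[w[0]], WEAPONS[w[1]], WEAPONS[w[2]]};
            }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(solve()));
    }
}
```

Only one of the 36 joint assignments survives all five clues, which is a useful check on whatever answer the Otter encoding and the CSP search produce.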

8. (25 points) Use the Graphplan planning algorithm to solve the blocks world
planning problem shown in Figure 1 and the Rush Hour problems in Figures
2 and 3. The executable, fact, and operator files for the blocks domain are
in the course directory. You must modify the fact file to solve the blocks world
planning problem shown in Figure 1, and write your own fact file and
operator file for the two Rush Hour problems shown in Figures 2 and 3.

Assume there are no trucks, only cars of length two. Define actions as a
movement of a car one square north, south, east, or west depending on
orientation. During execution of Graphplan, respond to the prompts to
perform automatic time steps and to print information; otherwise hit ‘B’ for
build until goals. Note that your operator file (for Rush Hour) should be the
same for both problems. The only part that should be different is the initial
condition in the fact file.

Turn in a copy of your domain files, and a printout of the solutions the
planner found. For the Rush Hour problems also include a report on the
search time required and a comparison on the ease of use between
Graphplan, and the search you implemented in Assignment 1. Which makes
the most sense for this domain? And why?

Problem coding recommendations: The ops file will be fairly short; the work
is in getting the fact file correct. For the facts, create a car object for each
car and a location for each grid square. Your preconditions should label
whether vehicles can go horizontally or vertically, where the nose and tail
of the car are located on the board, which squares are free, and which
locations are horizontally and vertically adjacent to each other.
Figure 1: Blocks World Planning Problem.

....O1O1....
....A1......
X0X0A1...... <= EXIT
....B1B1....
............
............
Figure 2: Initial State Rush Hour Problem 1.
O1P1P1Q1Q1B1
O1....C1..B1
..X0X0C1.... <= EXIT
......D1....
......D1....
............
Figure 3: Initial State Rush Hour Problem 2.
[Figure 1: Blocks World Start State and Goal State, built from the blocks A,
B, C, and D; the stacking arrangement did not survive extraction.]

Assignment 4 CSCE 523 Uncertainty and Machine Learning

1. (10 points) Bayes Rule: Seventy percent of the aircraft that disappear in
flight through the Bermuda Triangle are recovered (P(r) = 0.7). Sixty
percent of the recovered aircraft have an emergency locator (P(e|r) =
0.60).

Unfortunately, 90% of the aircraft not recovered do not have such a
locator. Suppose that an aircraft with a locator has disappeared. What is
the probability that it will be recovered (P(r|e))?
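The computation for problem 1 is a direct application of Bayes' rule; here is a sketch, assuming P(e|¬r) = 0.1 because 90% of unrecovered aircraft lack a locator. The function name is my own.

```java
public class BayesRule {
    // P(r|e) = P(e|r)P(r) / (P(e|r)P(r) + P(e|~r)P(~r))
    static double posterior(double pR, double pEgivenR, double pEgivenNotR) {
        double num = pEgivenR * pR;
        double den = num + pEgivenNotR * (1.0 - pR);
        return num / den;
    }

    public static void main(String[] args) {
        // 0.42 / (0.42 + 0.03) = 0.9333...
        System.out.println(posterior(0.7, 0.6, 0.1));
    }
}
```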

2. (20 points) Shown below is a Bayes network representing the risk of
flooding sewers (FS) in a city as dependent on rainfall (R), falling leaves
(FL), thunderstorm (TS), and autumn (A). Use the conditional probabilities
below to determine the conditional probabilities of a thunderstorm for the
observable scenarios FS ∧ A, FS ∧ ¬A, ¬FS ∧ A, and ¬FS ∧ ¬A.
FL: P(FL | TS ∧ A) = 0.8
    P(FL | TS ∧ ¬A) = 0.2
    P(FL | ¬TS ∧ A) = 0.05
    P(FL | ¬TS ∧ ¬A) = 0.01
R:  P(R | TS) = 0.7
    P(R | ¬TS) = 0.1
FS: P(FS | FL ∧ R) = 0.5
    P(FS | FL ∧ ¬R) = 0.2
    P(FS | ¬FL ∧ R) = 0.05
    P(FS | ¬FL ∧ ¬R) = 0.01
TS: P(TS) = 0.5
A:  P(A) = 0.33
[Figure: Bayes network with root nodes Thunderstorm (TS) and Autumn (A);
TS and A are the parents of Falling Leaves (FL), TS is the parent of Rain
(R), and FL and R are the parents of Flooding Sewers (FS).]
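One of the four scenarios can be checked by inference-by-enumeration, summing the hidden variables FL and R out of the joint; the CPT numbers are the ones listed above, and the class and method names are my own sketch.

```java
public class SewerNet {
    static double pFL(boolean ts, boolean a) {          // P(FL=true | TS, A)
        return ts ? (a ? 0.8 : 0.2) : (a ? 0.05 : 0.01);
    }
    static double pR(boolean ts) { return ts ? 0.7 : 0.1; }   // P(R=true | TS)
    static double pFS(boolean fl, boolean r) {          // P(FS=true | FL, R)
        return fl ? (r ? 0.5 : 0.2) : (r ? 0.05 : 0.01);
    }

    // P(FS=true | TS=ts, A=a), with the hidden FL and R summed out
    static double pFSgiven(boolean ts, boolean a) {
        double sum = 0;
        for (boolean fl : new boolean[]{true, false})
            for (boolean r : new boolean[]{true, false}) {
                double pfl = fl ? pFL(ts, a) : 1 - pFL(ts, a);
                double pr = r ? pR(ts) : 1 - pR(ts);
                sum += pfl * pr * pFS(fl, r);
            }
        return sum;
    }

    // P(TS=true | FS=true, A=a); the priors P(TS)=0.5 and P(A) are handled
    // by normalization (P(A) cancels because A is observed)
    static double pTSgivenFS(boolean a) {
        double num = 0.5 * pFSgiven(true, a);
        double den = num + 0.5 * pFSgiven(false, a);
        return num / den;
    }

    public static void main(String[] args) {
        System.out.println(pTSgivenFS(true));   // P(TS | FS, A), about 0.93
    }
}
```

The other three scenarios follow the same pattern with FS and A negated as needed, which also gives you numbers to compare against JavaBayes in problem 4.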

3. (20 points) Link sees Sheik on the horizon. Sheik is fighting with magic and
has all her hearts. Link wants to determine Sheik’s potential for defeating
the enemy and whether he should enter the fray.

Shown below is a risk analysis Bayesian network that Link plans to use. His
risk drivers are the type of weapon in use and the enemy faced. Using
certain weapons on specific enemies can improve effectiveness in battle.

The effectiveness affects the number of hearts the individual has, as well
as the number of bombs they may have left. His question is what is the
probability of success given magic and hearts: Pr(S|W=magic,H=true)?
W: Pr(W={magic,sword}) = <0.5,0.5>
E: Pr(E={poe,darknut,lizalfos}) = <0.33,0.33,0.33>
S:
  Success | W=magic         | W=sword
          | E=p  E=d  E=l   | E=p  E=d  E=l
  true    | 0.2  0.8  0.3   | 0.8  0.2  0.2
  false   | 0.8  0.2  0.7   | 0.2  0.8  0.8
[Figure: risk analysis network; the risk drivers Weapon (W) and Enemy (E)
are the parents of the control node Success (S), and S is the parent of the
risk indicators Hearts (H) and Bombs (B). The H and B CPTs appear under
Part II of problem 5 below.]
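A sketch of the requested query: the enemy E is summed out with a uniform 1/3 prior (approximating the listed 0.33 each), the unobserved Bombs node integrates to 1, and the Hearts evidence reweights the result using P(H=true|S) = 0.9 / 0.2 from the tables given under Part II of problem 5. The method name is my own.

```java
public class LinkRisk {
    // Pr(S=true | W=magic, H=true)
    static double pSuccessGivenMagicAndHearts() {
        double[] pSgivenE = {0.2, 0.8, 0.3};   // W=magic column: E=poe, darknut, lizalfos
        double pS = 0, pNotS = 0;
        for (double p : pSgivenE) {            // sum out E with prior 1/3
            pS += p / 3.0;
            pNotS += (1 - p) / 3.0;
        }
        double num = pS * 0.9;                 // P(H=true | S=true) = 0.9
        double den = num + pNotS * 0.2;        // P(H=true | S=false) = 0.2
        return num / den;
    }

    public static void main(String[] args) {
        System.out.println(pSuccessGivenMagicAndHearts());   // about 0.775
    }
}
```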

4. (20 points) For this problem, you need to build the Bayes network from
problem 2 in JavaBayes. Using the Bayes network, double-check your
solutions for problem 2. Then change the table for Rain to:
R: P(R | TS) = 0.9
   P(R | ¬TS) = 0.3
What are the conditional probabilities of thunderstorm (TS) given the
observable scenarios FL and ¬FL? Turn in your Bayes network file with
your assignment.

5. (30 points) Ahhh, life at AFIT – it’s not just a job, it’s an AI problem! You
wake up to the sunrise after studying for a final all night. You find yourself
amidst hundreds of cargo containers; your watch reads seven forty-five. Uh
oh, you only have fifteen minutes to get to your exam. Unfortunately, you
still must negotiate a maze of cargo between here and the classroom. Which
brings us to the problem: how many steps does it take to reach your
destination?

There are two challenges for you:
Part I:
This challenge requires you to calculate V(s) for a given map, and output
the MEU path. The calculation of the MEU should be conducted by
performing value or policy iteration (your choice).

This is a gridworld in which a cell is either occupied or empty, and the agent
may move North, South, East, or West, 1 square at a time. The cost of
moving is 1.0. When the agent moves, there is a probability of 0.90 that
they will end in the state that they want to be in, a probability of 0.07
that they remain in the current state, and a probability of 0.03 that they go
backwards a square (i.e. if they were headed West, they would instead go
1 square East). The world is fully observable: the agent knows its own
location and the locations of the goal and obstacles.
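A minimal value-iteration sketch under these assumptions, reduced to a hypothetical 1-D corridor with the goal at the right end so the sweep stays short: cost 1.0 per move, probability 0.90 of the intended square, 0.07 of staying put, 0.03 of slipping backwards. A full solution runs the same Bellman backup over every maze cell and all four actions, then reads off the MEU path greedily from V(s).

```java
import java.util.Arrays;

public class ValueIteration {
    // Returns V(s) for a corridor of n cells; v[n-1] is the goal (value 0).
    static double[] solve(int n) {
        double[] v = new double[n];
        for (int iter = 0; iter < 1000; iter++) {   // sweep until (effectively) converged
            double[] nv = v.clone();
            for (int s = 0; s < n - 1; s++) {
                int fwd = s + 1;                    // intended move: toward the goal
                int back = Math.max(0, s - 1);      // backwards slip; clamp at the wall
                nv[s] = -1.0 + 0.90 * v[fwd] + 0.07 * v[s] + 0.03 * v[back];
            }
            v = nv;
        }
        return v;
    }

    public static void main(String[] args) {
        // values are negative expected costs, rising toward 0 near the goal
        System.out.println(Arrays.toString(solve(5)));
    }
}
```

In the real grid the backup maximizes over the four actions; here the corridor has only one sensible action, which keeps the sketch focused on the transition model.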

Part II:
H: Hearts | S=true | S=false
   true   |  0.9   |  0.2
   false  |  0.1   |  0.8
B: Bombs  | S=true | S=false
   true   |  0.8   |  0.2
   false  |  0.2   |  0.8
(These Hearts and Bombs CPTs belong to the network in problem 3.)

For this challenge, solve the same problem using reinforcement learning.
Use TD(0)/SARSA and TD(λ). Solve the problem for V(s), not Q(s,a). The
values should converge close to those from Part I.

Turn-in should include your code (no language stipulation), and a write-up
which draws comparisons between the solutions. Include a results graph
indicating the path length vs iteration (plot based on the same start location,
and curves should go down). Testing should use world sizes of 25×25 and
50×50.

DataFile Format Example:
3 3
O O O
G V V
V V V
World x size, World y size
Each map location either O – obstacle, V – vacant, or G – goal
Have the agent start location be manually inserted or randomly generated
at the user’s request.

Take a look at using the BURLAP (https://burlap.cs.brown.edu/) library as a
starting point. You will need to install maven and hook it into your IDE.
Turn-ins: a write-up of your solution, with a graph comparing the
convergence between approaches. Be sure to discuss your parameter
settings, and how you identified the values you used (a parameter sweep
is not unwarranted).

Reinforcement Learning Review
For this problem, you need to learn the best state values for each state in
the domain. In order to learn these functions, you will need to make use of
TD(λ). In order for the weights to be learned, you will need to run your
program several times.

To learn the state values using TD(λ), you will need to consider the problem
as a set of state positions s0, s1, …, sT. The reward for the goal position sT
can be your choice; I suggest 100.
can be your choice, I suggest 100.
The theoretical/forward view of TD(λ) evaluates the target values for the
states s0, s1, …, sT-1 as

    V_target(s_t) = (1 − λ) · Σ_{k=1}^{T−t−1} λ^(k−1) · V(s_{t+k}) + λ^(T−t−1) · V(s_T)

where t is the index of the state, and λ is a value between 0.0 and 1.0.
The issue with applying this representation is that it is a forward view and
needs to be converted to an implementable backward view. The way this is
handled is by adding the concept of eligibility traces, e(s), (per Sutton)
which maintain the reward decay. The resulting algorithm is shown in Figure 2.

    Initialize V(s) arbitrarily and e(s) = 0, for all s ∈ S
    Repeat (for each game):
        Initialize s, and e(s) = 0 for all s
        Repeat (for each step of game):
            Take action a; observe reward r and next state s'
            δ ← r + γ V(s') − V(s)
            e(s) ← e(s) + 1
            For all s:
                V(s) ← V(s) + α δ e(s)
                e(s) ← γ λ e(s)
            s ← s'
        until s is terminal

Figure 2: Backward view of TD(λ).

Note that in the backward view of TD() that all of the states and all of the
eligibility trace state values are updated every single step of the learning
iteration. In the interest of ease of implementation, we will assume that the
only eligibility trace states we must track are those states that we visit
during the course of an iteration (i.e. start to goal) and that all other
eligibility trace state values are 0. This means that we must only track and
update the states for each game. The reason that this assumption works is
that in most cases, the decay values cause e(s) to decrease at a high
enough rate that repeat visits to the state within a short time frame are
unlikely.
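The visited-states-only simplification above can be sketched by keeping e(s) in a map, so unvisited states implicitly carry a trace of 0. The 1-D walk in main() and all names are illustrative stand-ins for the gridworld, using the suggested goal reward of 100 (with γ = 1 and no step cost, every V(s) should approach 100).

```java
import java.util.*;

public class TDLambda {
    static double alpha = 0.1, gamma = 1.0, lambda = 0.8;

    // One backward-view TD(lambda) step for the transition s -> s2 with reward r.
    static void update(double[] V, Map<Integer, Double> e, int s, double r, int s2) {
        double delta = r + gamma * V[s2] - V[s];    // TD error
        e.merge(s, 1.0, Double::sum);               // bump trace of the current state
        for (Map.Entry<Integer, Double> t : e.entrySet()) {
            V[t.getKey()] += alpha * delta * t.getValue();  // credit by eligibility
            t.setValue(gamma * lambda * t.getValue());      // decay the trace
        }
    }

    public static void main(String[] args) {
        double[] V = new double[5];     // state 4 is the goal; its value stays 0
        for (int ep = 0; ep < 500; ep++) {
            Map<Integer, Double> e = new HashMap<>();   // traces reset each game
            for (int s = 0; s < 4; s++)
                update(V, e, s, (s == 3) ? 100.0 : 0.0, s + 1);  // reward on reaching goal
        }
        System.out.println(Arrays.toString(V));
    }
}
```

Because only visited states ever enter the map, each step touches at most the states seen so far this game, which is exactly the saving the assumption buys.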

Java code stub for reading a maze file:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

// This section merely parses the input file into an array.
char x[] = new char[1]; // temp variable used for file parsing

// Open the maze file; one can change the maze file here
// (make sure the files are in the same directory).
File myFile = new File("maze1");
FileReader fileReader = new FileReader(myFile);
BufferedReader reader = new BufferedReader(fileReader);
String line = null;

line = reader.readLine(); // read first line

String[] valueStr = line.trim().split("\\s+"); // split into two integers

int length = Integer.parseInt(valueStr[0]);
int width = Integer.parseInt(valueStr[1]);

String[][] newMaze = new String[length][width]; // create maze array

//System.out.println(length + " " + width);

// Parse the rest of the file into the array.
for (int i = 0; i < length; i++) {
    for (int j = 0; j < width; j++) { // iterate over the width, not the length
        reader.read(x);
        newMaze[i][j] = new String(x);
    }
    reader.read(); // consume the carriage return
}
reader.close();