Due Date:        March 26, 2018  23:59 hours

Provide a short answer to each of the questions in Parts I, II, and III.

Part I: Policy Interpretation

This project requires Python 2.7.  Source: http://c322.wirsz.com/1/reinforcement.zip

1. Run this simulation, press enter for the resulting policy, and describe what is happening. Run each of these simulations more than once and see if you get a consistent result. What does the final policy mean in plain English?
python gridworld.py -a q --livingReward .5 --episodes 5 -s 100

2. Run this simulation and describe what the final policy means:
python gridworld.py -g CliffGrid -a q --livingReward -.5 --episodes 50 -s 100

3. Run this simulation and describe what the final policy means. Look at the q-values as well as the policy:
python gridworld.py -g CliffGrid -a q --discount .1 --episodes 10 -s 100

4. Run this simulation and describe what the final policy means:
python gridworld.py -g MazeGrid -a q --discount .9 --episodes 30 -s 100

5. Run this simulation and compare #4 and #5: which technique seems to be more efficient, discount or living penalty? Why?
python gridworld.py -g MazeGrid -a q --livingReward -.1 --episodes 30 -s 100

6. Noise refers to the percent chance that an unintended action occurs. Run the following simulation, which results in a non-optimal policy. Reducing the noise to zero will return the optimal policy. What is the maximum amount of noise that will still result in an optimal policy?
python gridworld.py -a value -i 100 -g BridgeGrid --discount 0.9 --noise 0.2

7. If you change the discount rate in the previous question to 1.0, does this increase or decrease noise tolerance? Why? What is the new maximum noise that will still result in the optimal policy?

8. If you use only a living penalty, does this increase or decrease noise tolerance, and why? Feel free to experiment with larger "living reward" penalties:
python gridworld.py -a value -i 100 -g BridgeGrid --livingReward -.1 --noise 0.2
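For reference when interpreting these runs: the -a q agent performs the standard tabular Q-learning update, and the sketch below shows where --discount and --livingReward enter it. This is a minimal illustration with assumed names (update, Q, actions), not code from gridworld.py.

# Minimal tabular Q-learning sketch (illustrative names, not gridworld.py's).
from collections import defaultdict

alpha = 0.5            # learning rate
gamma = 0.9            # --discount
living_reward = -0.1   # --livingReward, added to the reward on every step

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def update(state, action, reward, next_state, actions):
    # actions: the legal actions in next_state (assumed non-empty)
    reward += living_reward                      # per-step reward shaping
    best_next = max(Q[(next_state, a)] for a in actions)
    sample = reward + gamma * best_next          # one-step lookahead target
    Q[(state, action)] += alpha * (sample - Q[(state, action)])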
Due Date:        March 5, 2018  23:59 hours

https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/

Implement the "TensorFlow for Poets" guide by following the steps of the training guide with the following modifications (an example invocation appears at the end of this assignment).

REQUIREMENTS:
Use the Inception architecture.
Do not restrict the training steps or specify additional training steps (use the TensorFlow default only).
Download domestic rabbit training images instead of flowers: http://c322.wirsz.com/1/8rabbits.zip
Train the model, find 3 rabbit pictures and 1 "unknown" picture on the Internet, and analyze them (suggestion: look for images of those 8 rabbit breeds, but do not use pictures from the training set).
Figure out how to make the system return a certain percentage for "unknown" when given images that don't fall into the 8 rabbit categories (see screenshots for examples).
Tweak the settings to achieve better numbers for the rabbit pictures. Increasing the number of training steps or downloading/removing training images are NOT valid options. Tell me what you did and provide screenshots from analyzing your same 4 pictures.

Submit the 3 rabbit pictures you chose, a short document describing the Python commands you used for training/analysis and how you overcame the challenges in this assignment, and multiple screenshots of the statistics (at steps #4, #5, and #6).

                                                                                        Last Revised: February 21, 2018
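For reference, the training and analysis commands look roughly like the following. This is a sketch under assumptions: the retrain.py and label_image.py scripts come from the codelab, the 8rabbits directory and rabbit1.jpg names are illustrative, and the --architecture flag exists (and is needed) only in codelab versions that default to MobileNet; older versions default to Inception and omit it. Note that no --how_many_training_steps flag is passed, per the requirement to use the TensorFlow default.

# Hypothetical invocation, assuming the codelab's scripts:
python retrain.py \
    --architecture inception_v3 \
    --image_dir 8rabbits \
    --output_graph retrained_graph.pb \
    --output_labels retrained_labels.txt
python label_image.py rabbit1.jpg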
Due Date:        February xx, 2018  23:59 hours

Use numerals to represent the natural numbers in Prolog. The constant z (for zero) is a numeral, and if X is a numeral then s(X) is also a numeral (where the functor s represents the successor operation, i.e. ++). The numerals corresponding to 0, 1, 2, 3, etc. are z, s(z), s(s(z)), s(s(s(z))), etc. Add code comments to show your understanding and demonstrate with test cases.

Define a number of predicates to interact with these numerals (a sketch of two more predicates in this style appears at the end of this assignment). For example:

plus(z,Y,Y).
plus(s(X),Y,V) :- plus(X,Y,U), % remove a function from X till reaching zero
                  s(U) = V.    % add a function to V each time

REQUIREMENTS:
Create an implementation of natural numbers
Implement plus
Implement equal
Implement less than
Implement greater than
Implement minus
Implement multiplication
Implement mod/remainder
Implement factorial

Extra credit: implement number so that number(X, N) is true if X is a numeral corresponding to the decimal integer N. For example, number(s(z),1) is true and number(s(s(s(z))),2) is false.

Extra credit: implement e_number, so that e_number(X, Y) is true if Y is a phrase in English for the natural numbers between zero and 100, representing X. For example:
e_number(s(s(s(z))), three) returns true
e_number(X, twenty five) returns X = s(s(s(...(z)...)

Submit a working .pl file that demonstrates the cases above with a significant number of code comments to explain the operation of each predicate, and also provide test cases.

                                                                                         Last Revised: January 21, 2018
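As a starting point for the remaining predicates, minus can be defined by running plus "backwards", and multiplication by repeated addition. A minimal sketch, assuming the plus/3 definition shown above (your own definitions may differ):

minus(X, Y, Z) :- plus(Z, Y, X).   % X - Y = Z exactly when Z + Y = X

times(z, _, z).                    % 0 * Y = 0
times(s(X), Y, Z) :-               % (X+1) * Y = (X * Y) + Y
    times(X, Y, U),
    plus(U, Y, Z).

For example, the query times(s(s(z)), s(s(s(z))), Z) binds Z to the numeral for 6.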
Due Date:        February 19, 2018  23:59 hours

Pick 6 of the 9 predicates and implement them in Prolog. Add code comments to show your understanding and demonstrate with test cases. (A sketch of one predicate appears at the end of this assignment.)

REQUIREMENTS:
1. Find the last element of a list.  last([a,b,c,d]),
2. Find the second-to-last element of a list.  nextlast([a,b,c,d]),
3. Find the K'th element of a list.  kelement([a,b,c,d,e,f,g],5),
4. Find out whether a list is a palindrome.  palin([a,c,c,b,a]), palin([a,b,c,c,b,a]),
5. Flatten a nested list structure.  flatten([a,[b,[c,d],e]]),
6. Eliminate consecutive duplicates of list elements.  compress([a,a,a,a,b,c,c,a,a,d,e,e,e,e,f]),
7. Drop every N'th element from a list.  drop([a,b,c,d,e,f,g,h,i,k],3),
8. Remove the K'th element from a list.  remove([a,b,c,d],3),
9. Insert an element at a given position into a list.  insert(e,[a,b,c,d],3).

You may run a single unified test case such as "s.":

s :-
    last([a,b,c,d]),
    nextlast([a,b,c,d]),
    kelement([a,b,c,d,e,f,g],5),
    numelements([a,b,c,d,e,f,g]),
    reverse([a,b,c,d,e,f,g]).

You may also use write and nl (newline) statements inside your predicates to show the output:

... , write('4. Number of elements: '), write(I), nl.

Submit a working .pl file that demonstrates the cases above with a significant number of code comments to explain the operation of each predicate, and also provide test cases. Make sure you eliminate all singleton variable warnings.

                                                                                        Last Revised: February 19, 2018
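For illustration, one common shape for these predicates is a two-argument worker plus a one-argument wrapper that writes the result. A sketch of compress in that style (the wrapper and its output format are assumptions, not requirements):

compress(L) :- compress(L, R), write('Compressed: '), write(R), nl.

compress([], []).
compress([X], [X]).
compress([X,X|T], R) :- compress([X|T], R).              % drop one duplicate
compress([X,Y|T], [X|R]) :- X \= Y, compress([Y|T], R).  % keep X, recurse

The query compress([a,a,a,a,b,c,c,a,a,d,e,e,e,e,f]) then prints [a,b,c,a,d,e,f].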
Due Date:  April 7, 2018 23:59 hours

Program requirements:
Download the source code and content files from Canvas and set up the project on your system.
Pick a theme for your scene; find and create models that match your theme.
Create 4 or more modeled treasures and place them in various locations on the map. One treasure must be placed within the "barrier" walls close to the inside corner (vertex (447, 453), position (67050, 67950)) but not within any wall "brick" bounding sphere.
Generate your terrain by modifying TerrainMap.cs, then use a paint program's Gaussian blur effect to average or smooth the height and color textures.
Modify the player object and NPC to move better on top of the terrain by interpolating with the Vector3.Lerp(....) method. You must have an 'L' key toggle Lerp or default terrain following on/off. This way you (and I) can see/test the effect of Lerp.
The NPAgent currently follows a path-following algorithm. The 'n' key should toggle the agent into a treasure-goal seeking state.
You must write a very detailed but concise description of EVERY feature implemented and exactly how you implemented it.
If the NPAgent detects a treasure within 4000 pixels, it should automatically switch to treasure hunting.
The number of treasures you have collected should be shown in the inspection frame. The number of treasures the NPC has collected should also be shown.
Design an exploration path that takes the NPAgent within "detection radius" distance of all treasures. This path should be reversible: at the beginning of each simulation, the direction should be randomly picked. The direction should be shown in the inspection frame.
The NPAgent should be made to test collisions, just like the player.
When all treasures are found, the NPAgent should stop moving.
The NPAgent must use a fixed collection of sensors to detect objects in its path (toggle their visibility with the 'Z' key).
Dogs should flock around the player with 4 levels of packing (0%/33%/67%/99%). The current level of packing should be shown in the inspection frame.

Submit a zip archive of your project directory (AGMGSK) to Canvas. The project name, class, and email addresses of all group members must be stated at the beginning of your Program.cs file. I strongly recommend internal comments for every change made, signed by the individual who made them. This is for your benefit; your grade will be identical to the team's grade regardless of dozens of code comments or zero code comments.

                                                                                                  Last Revised: March 25, 2018

Details:

Project setup: See lecture slides or ask for assistance if you have any problems compiling and running the initial project code.

Scene Design: You should first decide what the theme of your scene will be. All scenes must have rolling hills (no really sharp inclines). For example, you could have a scene with Stonehenge-like ring structures, a desert with pyramids, the American plains with teepees, or a city scene with blocks of simple rectangular buildings. You can populate your scene with models that you load. These models should be generated with a modeler of your choice that can save (export) *.fbx files or DirectX files (*.x). The DirectX files should be triangulated and saved with materials and normals. My advice is to keep it simple for the first project. If you use AC3D's File | Export | Direct X, select "right handed coordinate system" in the export dialog. Some of your models can be downloaded from the web (they must be scaled appropriately). You do not need a lot of models.
Do not spend too much time on models and scene design. Blender is free, and free student versions of 3D Studio Max and Maya exist. The 2013 FBX Converter will convert .x and .3ds models to .fbx files. A 14-day trial version of AC3D may also be downloaded; full versions are installed on the lab machines in JD 1618.

Treasures: 4 or more modeled treasures, scaled between 100 and 300 pixels in width, height, and depth. One treasure must be placed within the "barrier" walls close to the inside corner (vertex (447, 453), position (67050, 67950)) but not within any wall "brick" bounding sphere. AGMGSK is scaled so that 4 pixels = 1 inch. The spacing between vertices in your terrain will be 150 pixels. Thus your terrain will range from 0 to 76,800 in the X and Z dimensions. The origin of the scene will be the left, back corner of your terrain when viewed from above (+Y). For each step an Agent takes, the step size is 10.

Terrain: Once you have a theme, you need to generate your terrain by modifying TerrainMap.cs. You should use the Brownian-motion terrain generation algorithm presented in lecture to create height values. You should have some nearly flat terrain in the lower "testing" quadrant of the scene (X and Z > 38,250). The area of the "walls" should be flat ("relatively flat"). Think about how your step and radius parameters affect the dispersion of height values.

Color Table: You should design a color table that maps height values into colors. Since we are using textures to hold the height values, heights range from 0 to 255. For example, you could have a different color for every interval of 25 or 50 heights: height values of 0 could be a tan or sand-like color, 1 to 25 could be a tan-green or perhaps a yellow-green, 26 to 50 could be a darker green, and above 225 you might have white for snow. You should add some noise to your vertex color values.

Smoothing: You should smooth your heightTexture and colorTexture. You can do this with a paint tool like Paint.NET or GIMP to add a Gaussian blur effect to your textures. This will make height and color transitions smoother. Put the heightTexture (png or xnb) and colorTexture (png or xnb) files in the appropriate Content directory of your P1 application (AGMGSK project) after smoothing.

Terrain Surface: You need to modify the starter kit so that the player object and NPAgent object move better on top of the terrain. In the distribution, Agent and Pack Object3Ds are set at the surface height of the minimum (X, Z) vertex of the surface they are on ("upper left corner of the quad holding two surfaces"). Terrain following should be done by interpolating with the Vector3.Lerp(....) method (see the sketch at the end of this section). You must have an 'L' key toggle Lerp or default terrain following on/off. This way you (and I) can see/test the effect of Lerp.

Path Following: The NPAgent currently follows a path-following algorithm. The NPAgent's update method should be modified so that it moves in one of two states: "path-following" or "treasure-goal". In the treasure-goal state, the NPAgent moves directly towards the next closest unfound treasure until it "tags" the treasure. When the user presses the 'n' keyboard key, the NPAgent state should change from path-following to treasure-goal. The NPAgent should remember what its current path-following goal is, so it can resume path-following. The NPAgent in treasure-goal movement should always go to the closest untagged treasure.
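The Terrain Surface item above calls for interpolating surface height with Vector3.Lerp. A minimal sketch of bilinear interpolation across a terrain quad follows; the class and field names (TerrainFollower, heights, spacing) are illustrative assumptions, not the starter kit's identifiers, and a triangle-exact version would split the quad along its diagonal rather than blending the whole quad:

using Microsoft.Xna.Framework;

public class TerrainFollower {
    const float spacing = 150f;  // distance between adjacent terrain vertices
    float[,] heights;            // per-vertex heights from the height map

    public float SurfaceHeight(float x, float z) {
        int ix = (int)(x / spacing), iz = (int)(z / spacing);
        float fx = (x - ix * spacing) / spacing;  // 0..1 across the quad in X
        float fz = (z - iz * spacing) / spacing;  // 0..1 across the quad in Z
        // Lerp along the two X edges of the quad, then between those edges.
        Vector3 back = Vector3.Lerp(
            new Vector3(0, heights[ix, iz], 0),
            new Vector3(0, heights[ix + 1, iz], 0), fx);
        Vector3 front = Vector3.Lerp(
            new Vector3(0, heights[ix, iz + 1], 0),
            new Vector3(0, heights[ix + 1, iz + 1], 0), fx);
        return Vector3.Lerp(back, front, fz).Y;
    }
}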
When the NPAgent "tags" a treasure, it automatically switches back into path-following mode and moves towards its next goal. The NPAgent finds 1 treasure (if one is not yet tagged) for each 'n' press. Either the Agent or NPAgent can "find" or "tag" a treasure if it gets within 200 pixels of it. Once a treasure has been found it should be "tagged", so the treasure is no longer active. The treasure's display should indicate its "tagged" (non-active) state. The agent that tagged the treasure increases its treasure count. Your program should display the number of treasures found ("tagged") by each agent in an Inspector pane info line. Consider placing the treasures in the flat "testing" area where the Player is loaded. This way you can see and test your program quickly without having to wait for the NPAgent to move relatively long distances. The simulation does not end when all treasures are tagged; the program ends when the user closes the window or presses the 'esc' key.

Treasure Detection: The NPAgent moves under two states: exploration and treasure-directed. When the NPAgent starts, it has a predetermined exploration path that traverses the scene and will detect all treasures that exist in the scene. When the NPAgent detects a treasure, it switches to its treasure-directed navigation algorithm. When the NPAgent has "tagged" the treasure, it returns to its last position in its exploration path and continues exploration. If the player does not move, the NPAgent should collect all the treasures and stop moving at the end of its exploration path. (A sketch of this two-state update appears at the end of this section.)

New Exploration Path: You will need to design an exploration path that takes the NPAgent within its treasure-detection radius (4000 pixels) of all treasures. This path should be reversible: at the beginning of each simulation, the direction should be randomly picked. Obstacles should be in this pathway.
A running total of the treasures collected should be reported in the [I] Inspector window; only the NPAgent stops moving, the simulation continues running.
The NPAgent should be made to test collisions, just like the player.
When all treasures are found, the NPAgent should stop moving.

Obstacle Avoidance: With obstacle avoidance, the NPAgent moves toward its next goal location from its current location. As it moves, it uses a fixed collection of bounding-sphere "sensors" to detect obstacles in its path. When there is a sensor collision, the NPAgent moves to avoid an actual collision with the object. When there is no sensor collision, the NPAgent resumes movement towards its next goal location. If you choose obstacle avoidance, your NPAgent must not get "stuck", and your approach must not be tailored to your scene only: it should work on other exploration paths. We will adopt a very simple test, the reverse of the exploration path; your approach should work in either exploration path direction. If you use obstacle avoidance, you must submit an FSM diagram for your algorithm. For obstacle avoidance, the collision sensors should be visible (as BoundingSpheres). Collision sensors could be toggled visible or not visible with the 'Z' key.

Complex Path to Treasure: Your project must have one complex path to a treasure. This is a path to a treasure inside the boundary "walls" of the AGXNASK distribution. The treasure can't be inside the bounding sphere of any "brick". I recommend using vertex (297, 451) for X,Z positions of (44,550 and 67,650). Of course, the adjacent bricks must also be outside of the treasure's bounding sphere.
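The Treasure Detection paragraph above describes a two-state update loop. A minimal sketch of that state machine follows; every type and member name here is an illustrative assumption, not the distribution's code:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Xna.Framework;

enum NPAgentState { Exploring, TreasureDirected }

class Treasure { public Vector3 Position; public bool Tagged; }

class NPAgentSketch {
    const float DetectRadius = 4000f;  // treasure detection distance (pixels)
    const float TagRadius = 200f;      // tagging distance (pixels)
    NPAgentState state = NPAgentState.Exploring;
    Vector3 position, pathGoal;        // pathGoal is remembered across states
    int treasureCount;
    List<Treasure> treasures = new List<Treasure>();

    void Update() {
        // Closest untagged treasure, if any remain.
        Treasure goal = treasures.Where(t => !t.Tagged)
            .OrderBy(t => Vector3.Distance(position, t.Position))
            .FirstOrDefault();
        if (goal == null) return;  // all treasures tagged: NPAgent stops moving

        if (state == NPAgentState.Exploring) {
            if (Vector3.Distance(position, goal.Position) < DetectRadius)
                state = NPAgentState.TreasureDirected;  // detected: switch state
            else
                MoveToward(pathGoal);                   // continue exploration
        }
        if (state == NPAgentState.TreasureDirected) {
            MoveToward(goal.Position);
            if (Vector3.Distance(position, goal.Position) < TagRadius) {
                goal.Tagged = true;
                treasureCount++;
                state = NPAgentState.Exploring;  // resume saved path goal
            }
        }
    }
    void MoveToward(Vector3 target) { /* step 10 pixels toward target */ }
}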
My suggestion is a treasure with a bounding sphere ≈ the Player's bounding sphere.

Packing: Dogs should flock around the player with 4 levels of packing (0%/33%/67%/99%). Dogs always move forward and move in a leader-based quasi-flocking algorithm. The player is the pack's leader. Dogs do not do obstacle avoidance, but they do test collisions. If the player does not move, a dog can get "stuck" until the player moves. Each dog has a probability that it will "pack" or explore. There should be 4 levels of packing: 0%, 33%, 67%, and 99%. With 0%, none of the dogs will pack. With 67%, a dog will pack on approximately two thirds of its updates. The level of packing should be toggled by pressing the 'P' key (cycles between levels). Display the packing probability in one of the info lines available. At level 0, the dogs should explore (wander) away from the player if the player stops moving; this is the behavior of the dogs in the AGXNASK distribution. Changing the level to 99% should cause the dogs to return to the player and position themselves near the player. A good test of packing is to not move the player and to select different levels of packing. You can develop your own packing variant, but consider the ideas of alignment, cohesion, and separation forces. There is no need for a gap in the separation arc behind the leader. If the player is not moving and is in an open field at 67% packing, the dogs should "mill around" the leader. (A sketch of the per-update pack-or-explore decision appears at the end of this handout.)

Documentation: You must write a very detailed but concise description of EVERY feature implemented and exactly how you implemented it, updated for Project 2, including new classes, methods, and algorithms.

No late submissions are accepted unless a request for extension is granted. See the syllabus for further details. Partial/incomplete projects should be submitted on the due date. This class is project-oriented, and subsequent projects depend on material designed in prior projects.
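The packing paragraph above boils down to a per-update random decision for each dog. A sketch under assumptions (the class shape is illustrative; Pack and Wander stand in for your steering forces and for the distribution's default wander behavior):

using System;

public class Dog {
    static readonly float[] packLevels = { 0.00f, 0.33f, 0.67f, 0.99f };
    static int level = 0;                 // cycled by the 'P' key
    static readonly Random rng = new Random();

    public void Update() {
        // At 67%, roughly two thirds of updates steer toward the leader.
        if (rng.NextDouble() < packLevels[level])
            Pack();    // alignment + cohesion + separation toward the player
        else
            Wander();  // default AGXNASK explore behavior
    }
    void Pack() { /* ... */ }
    void Wander() { /* ... */ }
}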
Due Date:  February 22, 2018 23:59 hours

This project and the 2nd one will utilize the AGMGSKv9 distribution. This is an open-source 3D simulation developed by Professor Barnes. The base project is described in greater detail in AGMGSK.pdf. Working with this project requires Microsoft Visual Studio 2017 and MonoGame v3.6. Linux users may utilize MonoDevelop instead of Visual Studio.

Program requirements:
Download the zipped-up source code & content files from Canvas and set up the project on your system.
Pick a theme for your scene; find and create models that match your theme.
Create 4 or more modeled treasures and place them in various locations on the map. One treasure must be placed within the "barrier" walls close to the inside corner (vertex (447, 453), position (67050, 67950)) but not within any wall "brick" bounding sphere.
Generate your terrain by modifying TerrainMap.cs, then use a paint program's Gaussian blur effect to average or smooth the height and color textures.
Modify the player object and NPC to move better on the surface of the terrain by interpolating with the Vector3.Lerp(....) method. You must have an 'L' key defined to toggle Lerp or default terrain following on/off. This way you can see the effect of Lerp. (A sketch of such a key toggle appears at the end of this assignment.)
The NPAgent currently follows a path-following algorithm. The 'N' key should toggle the agent into a treasure-goal seeking state.
You must write a detailed but concise description of EVERY feature implemented and exactly how you implemented them.

Teams: You are encouraged to work in teams (1 to 3 members) on all of these projects. There is one project submission per team, and all team members receive the same project grade. Feel free to discuss problems with other teams and look at other teams' code, but do not copy from other groups. Project issues will appear as questions on exams, so it is essential for you to understand the entire project, not just the sections you personally worked on.

Submit a zip archive of your (2) project directories (TerrainMap & AGMGSK) to Canvas. The project name, class, and email addresses of all group members must be stated at the beginning of your Program.cs file. I strongly recommend internal comments for every change made, signed by the individual who made them. This is for your benefit; your grade will be identical to the team's grade regardless of the number of code comments you make.

                                                                                                  Last Revised: January 15, 2018
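For the 'L' key requirement above, a minimal edge-triggered toggle using MonoGame's keyboard API might look like this (the class and field names are illustrative assumptions):

using Microsoft.Xna.Framework.Input;

public class LerpToggle {
    bool lerpOn = true;        // true: Lerp following; false: default following
    KeyboardState oldState;

    public void Update() {
        KeyboardState newState = Keyboard.GetState();
        // Flip only on the press edge so holding the key doesn't re-toggle.
        if (newState.IsKeyDown(Keys.L) && oldState.IsKeyUp(Keys.L))
            lerpOn = !lerpOn;
        oldState = newState;
    }
}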