Ph.D. - August 2021

week 1

  • No luck running Unreal apps on WSL; they work on native Ubuntu.

WSL needs X11 to run graphical apps

Steps:

OpenAI Gym

  • Get started -> https://gym.openai.com/docs/

  • observation (object): an environment-specific object representing your observation of the environment. For example, pixel data from a camera, joint angles and joint velocities of a robot, or the board state in a board game.

  • reward (float): amount of reward achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward.

  • done (boolean): whether it’s time to reset the environment again. Most (but not all) tasks are divided up into well-defined episodes, and done being True indicates the episode has terminated. (For example, perhaps the pole tipped too far, or you lost your last life.)

  • info (dict): diagnostic information useful for debugging. It can sometimes be useful for learning (for example, it might contain the raw probabilities behind the environment’s last state change). However, official evaluations of your agent are not allowed to use this for learning.
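
The docs page linked above boils down to this loop (lightly adapted from the docs; the four values land exactly as described in the bullets):

```python
import gym

env = gym.make("CartPole-v0")
observation = env.reset()
for _ in range(1000):
    env.render()
    action = env.action_space.sample()  # random agent for now
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
env.close()
```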

Retro Gym

https://github.com/openai/retro
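
Quick smoke test; Airstriker-Genesis ships with gym-retro, so no extra ROMs are needed:

```python
import retro

env = retro.make(game='Airstriker-Genesis')  # bundled ROM
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    env.render()
env.close()
```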

Gym Unreal CV

https://github.com/zfw1226/gym-unrealcv
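
Quick sanity check that the package is wired up (the env id below is the one I recall from the repo’s README; treat it as an assumption and verify against the installed version):

```python
import gym
import gym_unrealcv  # importing registers the Unreal environments with gym

env = gym.make('UnrealSearch-RealisticRoomDoor-DiscreteColor-v0')
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```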

Cross-compiling Unreal from Windows to Linux

Vulkan


week 2

  • Meeting with Fabio
  • Present the plan for testing in Humanitas

Florian (Humanitas)

  • I’m not sure it makes sense to define use cases as missions if you want to test the inner workings of the simulator.

    • Because the “go from A to B” logic happens inside the emulated drones, not within the simulator.
    • What you would want to test within the simulator is more like “spawn robot model A at location (X, Y, Z)”.
  • Real-world scenario (see the MAVSDK sketch after this list):

    • Some sort of mission manager running on a companion computer sends control commands (e.g. “move to that GPS position”) to the PX4 Autopilot running on a dedicated board.
    • The PX4 Autopilot then powers the motors with the appropriate voltage to move the robot according to the command.
    • PX4 also reads the values collected by various sensors so the robot moves precisely.
    • Now, what the simulator does is virtualize the motors and the sensors.
    • So anything that has to do with moving to a particular location or executing a mission is outside the simulator’s scope.
    • Also, mocking the control inputs PX4 sends to the simulator is going to be more complicated than using PX4 directly.
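
As a concrete picture of the first bullet, a minimal sketch of a “mission manager” sending a goto command to PX4 over MAVLink, using MAVSDK-Python (an assumption: the actual companion-computer stack may use something else; the coordinates and SITL port are placeholders):

```python
import asyncio
from mavsdk import System

async def goto(lat_deg, lon_deg, abs_alt_m):
    drone = System()
    await drone.connect(system_address="udp://:14540")  # default PX4 SITL port
    async for state in drone.core.connection_state():   # wait for the autopilot
        if state.is_connected:
            break
    await drone.action.arm()
    await drone.action.takeoff()
    await asyncio.sleep(10)  # crude wait for takeoff to finish
    # PX4 handles motors/sensors from here; the simulator only virtualizes them
    await drone.action.goto_location(lat_deg, lon_deg, abs_alt_m, 0.0)

asyncio.run(goto(47.397742, 8.545594, 500.0))
```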

So you have two options here (i.e., two scopes):

  1. You test only the simulator features (collision, rendering, network connection, spawning, ...) and you just move the robot around with a simplified controller. That controller moves the drone but bypasses the PX4-UE4 interface (so you cannot really test missions, only the simulator features). A sketch of such a controller follows this list.
  • This can be seen as a unit test (it tests only the UE4 project).
  2. You test the missions (and, in practice, the simulator features too). In that case, you use a PX4 Autopilot and the mock replaces the “mission manager”.
  • This is more of an integration test.
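
A hedged sketch of scope 1: drive a pawn with a simplified controller through UnrealCV, bypassing PX4 entirely, and assert on simulator state. The object name drone1 is hypothetical; the vset/vget commands are standard UnrealCV:

```python
from unrealcv import client  # module-level client, default port 9000

client.connect()
assert client.isconnected()

target = (100.0, 200.0, 50.0)
client.request('vset /object/drone1/location %f %f %f' % target)

loc = client.request('vget /object/drone1/location')
x, y, z = map(float, loc.split())
assert abs(x - target[0]) < 1.0  # did the engine actually move the pawn?
client.disconnect()
```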

  • You can totally mimic the mission, but then you would be testing your mock, so it’s not very helpful.

    • Because you don’t need to mimic a mission to unit-test collision, for instance.
  • Which one is more helpful?

    • It might not be very useful to test the engine itself.
    • That said, we will need some unit tests for the simulator nonetheless.
      • Unfortunately, those unit tests won’t be enough to fill your thesis.
  • A unit test would be “Start a server and a client. Does the client connect to the server?”

  • Then you can do variations of that test: what happens when the client and the server are on the same machine? Different machines? Multiple clients? (A minimal version is sketched below.)
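
A minimal sketch of that connection test, assuming the simulator listens on a plain TCP port (host and port are placeholders); the variations above would parameterize HOST or spawn several clients:

```python
import socket
import unittest

HOST, PORT = "127.0.0.1", 9000  # hypothetical simulator endpoint

class TestConnection(unittest.TestCase):
    def test_client_connects(self):
        # Fails with an exception if the server is down or unreachable
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            self.assertIsNotNone(sock.getpeername())

if __name__ == "__main__":
    unittest.main()
```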

  • Question: How can you be sure the agent is behaving as expected inside the game engine?

    • But this comes down to case 1, where you don’t test the mission, only the simulator
    • (because it makes no sense to test the mission without the autopilot (and the actual dynamics) of the robot)
  • In the end, I think you should consider scope 1.

    • Because it’s not really interesting to do integration tests without having tested the components first.
  • With scope 2, you end up testing everything at once (because the unit tests do not exist yet).

  • [ME] Can’t we load a scenario with a mission and execute it? If we can, we can train agents to do it by themselves.

    • Once again, I’m not sure I understand what you are training the agent for. Let’s say we add a lidar feature and we want to test it. What does the agent do to test it?
  • [ME] So, instead of writing a script or testing manually, we use agents that “learn” how to perform the actions. During the execution, we can spot problems like collisions, performance issues, or not reaching the goal, how long it takes, etc.

    • I really don’t see how a trained AI could generate test cases for a new feature it knows nothing about.
    • Also, if you need to create a “training program” to feed the AI for each feature, aren’t you better off writing the tests yourself?
  • [ME] The idea is to explore more of the scope of a game, because you can’t cover everything either manually or with scripts.

    • I understand, but I really don’t know how you could achieve that, or even what you could achieve.
    • Because this is not a game where you can train the AI to complete all the quests and thereby test the critical path of the game, for instance.
    • The use cases (the gameplay) are specific to each user.
    • So the only things you can actually test are whether the pawns (rover, drone) are moved properly by the engine, whether the engine renders the proper visuals, ... But to be honest, this is quite limited, as most of it is already being tested by Epic’s teams.
    • Also, for performance, UE4 has some profiling tools available, I believe.
  • Really, the question you should try to answer first is: what needs to be tested?

    • From a practical point of view, we really need small unit tests to ensure new features don’t break the old ones.
    • But those are not interesting for your PhD.

week 3

Fabio meeting

about the Humanitas meeting

  • The difficulty is generating scenarios that really test the app

  • Generate blueprints or something based on existing ones

  • Forget it and focus on the thesis

  • Game engines

  • Understand inputs and outputs

  • Create a ML model

  • Validate

  • Automation of tests improves the testing experience


week 4

  • Different heuristics for regression testing (with a dashboard)
  • Q: what to test?

Yann’s meeting

  • Draw the loop between the designers and testers and how the agents can solve it
  • Understand gym-unrealcv
    • How the communication with Unreal works
    • Modify the actions
    • Modify the reward function (see the wrapper sketch after this list)
  • Add new environments
  • Add new heuristics
  • Add profiling
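
A hedged sketch of one way to modify the reward function without touching gym-unrealcv internals: wrap the environment in a gym.Wrapper and reshape the reward there. The step penalty is illustrative, not the repo’s actual reward:

```python
import gym

class StepPenaltyReward(gym.Wrapper):
    """Adds a small per-step penalty to the original reward
    (illustrative shaping only)."""

    def __init__(self, env, step_penalty=0.01):
        super().__init__(env)
        self.step_penalty = step_penalty

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        reward -= self.step_penalty  # discourage long episodes
        return obs, reward, done, info
```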