Name | Skills
--- | ---
Jim Grolle | Sound Design, Backend
Navid Armaniyar | Game Design
Mika Neumann | Frontend, Sound Design
Task | Est. Time | Priority | Actual Time | Done?
--- | --- | --- | --- | ---
Creating base framework in Godot | 14h | | |
Text saving system | 4h | 1.1 | 4h | Yes
Displaying card system | 4h | 1.2 | 1h | Yes
Stat system | 4h | 1.3 | 0.5h | No
Choice System (buttons, doors) | 2h | 1.4 | 0.5h | No
Porting Offline → Online | 2h | | |
Port LibreCalc doc. to Godot | 2h | 2.1 | 0.5h | No
Music Setup | 16h | | |
Create base loop | 4h | 3.1 | |
Make interactive elements | 12h | 4.1 | |
Visual polishing | | | |
Shader as visual filter | 6h | 3.1 | |
Choice room polishing | 2h | 4.3 | |
Most basic setup | 4h | 3.2 | |
Creating (good) card design | 6h | 4.2 | |
Multiple Choice rooms | 6h | 4.4 | |
Recording story | 6h | 3.3 | |
Playtesting & Balancing | ??? | constant process | |
Total | 62h | | |
Priority | Est. Time
--- | ---
1 | 14h
2 | 2h
3 | 20h
4 | 26h
What is already done:
This project is designed to both be fun and to educate the player about what differentiates current models from strong AI.
Today I discussed the idea with Niclas Rautenberg. According to him, the fear of strong AI is often misused to distract from the more apparent dangers that current models already pose.
This consideration is important for the narrative, because the current idea is to leave the player with a better understanding of what keeps current AIs from being strong. This makes sense and will remain an essential part of the story, but we must avoid simply playing down legitimate concerns. The conclusion of the game must not be that an AI cannot easily become a strong intelligence and is therefore not dangerous. The conclusion should rather be that an AI cannot easily become a strong intelligence and is therefore still dangerous, just in a different way. If the game does not want to be misleading, it must show the dangers of a weak AI: personal information can be leaked, political misinformation spread, and the line between truth and lie blurred.
The game intentionally lets the player attempt certain things without making clear whether the AI actually succeeds or merely believes it has. This allows it to do things like "hack the government's computers" or "send out the army".
This is intended to show the player how imperfect the AI's senses are. It is extremely difficult for the AI to differentiate between real things and things it has merely generated. If it analyses historical military data and tries to influence a war, it might easily conclude that the best course of action is to set a trap for the enemy, since this strategy is common in its data. However, the AI might not have the resources to do this; it probably does not even have an army in the first place. From the AI's perspective this is difficult to understand. It does not have a sensor telling it "you have no army"; it can only infer this from large datasets that never directly mention it.
Communicating this to the player is difficult. There is a high risk of the player misunderstanding the narrative and believing the AI to be fully capable of doing everything it claims to do. That would make the player feel like they are controlling a powerful entity that quickly gains control over the entire planet, instead of feeling like they are in control of a very capable program that is unable to use its own power because it cannot distinguish between what is real and what is not.
If the player misunderstands this, the whole story is lost on them. Yet of all my five playtesters of the card game, not a single one even had the idea that the AI might just be hallucinating its success. The closest reaction I got was people noticing that the story sometimes includes completely false facts (such as assuming that Beethoven's Ninth was written by Mozart).
Still, this shows that the narrative design urgently needs improvement here. The game cannot be published if players misunderstand its story to a degree where, instead of being educated about AI, they are introduced to new misconceptions. This challenge is difficult because the idea of an all-knowing and all-powerful AI seems to be so strong in players' heads that even obvious signs of incompetence are easily overlooked. Our challenge is to be direct enough that misunderstanding is difficult while being subtle enough that the player feels like they are discovering the AI's weaknesses on their own. Our challenge is also to tell a story with an extremely unreliable narrator (the AI) and still give the player an understanding of the real impact their actions have.
Possible Solutions:
I personally like the follow-up events most. I think they would also add a lot to the player's sense of impact and make decisions more meaningful. They would also be comparatively easy to implement.
Additional information in the end screen would require fairly complex extra systems without adding much, in my view.
A guessing mechanic would shift the game's design toward being considerably more complex. I personally like the idea of a very simple gameplay loop of only making yes-and-no decisions. The guessing mechanic also has problems when it comes to detail: the AI does not even consider whether it has an army or not, so why should the game tell the player outright that this might not be true? This might lead the player to not understand how oblivious the AI truly is. It would also create an additional barrier between the player and the AI: if the player constantly evaluates the AI's thoughts, it will feel more like controlling an AI instead of being an AI. I don't really like the guessing idea.
That all being said, these solutions don't contradict each other and could theoretically be implemented together. For now I would prefer to introduce only the follow-up events and solve the rest of the problem with clever storytelling; a rough sketch of how follow-up events could plug into the card system follows below. If this ends up not working, I can still implement additional measures later.
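To make the follow-up events more concrete, here is a minimal sketch in GDScript (assuming Godot 4). The node, function, and field names (schedule_follow_up, advance_turn, pending_follow_ups) are placeholders invented for illustration, not the project's actual classes. The idea is simply that accepting a card can queue a reveal card that appears a few turns later and tells the player what actually happened.

```gdscript
extends Node

# Pending follow-up events: each entry stores the reveal text and how many
# drawn cards remain until it should be shown.
var pending_follow_ups: Array = []

func schedule_follow_up(follow_up_text: String, delay_cards: int = 3) -> void:
	# Called when the player accepts a card whose real outcome should be
	# revealed later (e.g. "the army you sent out never existed").
	if follow_up_text != "":
		pending_follow_ups.append({"text": follow_up_text, "cards_left": delay_cards})

func advance_turn() -> Array:
	# Called once per drawn card; returns the follow-up texts that are now due
	# so the existing card system can present them as regular cards.
	var due: Array = []
	var still_pending: Array = []
	for entry in pending_follow_ups:
		entry["cards_left"] -= 1
		if entry["cards_left"] <= 0:
			due.append(entry["text"])
		else:
			still_pending.append(entry)
	pending_follow_ups = still_pending
	return due
```

Keeping this as a small queue next to the existing choice system would leave the core yes/no loop untouched, which is why I expect the follow-up events to be comparatively cheap to implement.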
The AI constantly gets things wrong. Since it has no good way of telling what on the internet is correct and what isn't, it will often believe common lies or misunderstandings. For example, it overrates the amount of pornography present in its own data (the public internet).
If we want to be realistic, we need to choose errors the AI would actually be likely to make, which are often the same errors many humans believe without questioning. This leads to the issue that players might not recognize that this is false information and instead believe the lies we put into the story.
Possible Solutions:
I don't like any of these options. Adding more wrong information, to the point where the player constantly encounters it, is a heavy restriction on the writing that will be annoying to work with.
Making the misinformation more blatant also makes it less believable that the AI considers it correct.
Removing the misinformation would be a terrible decision for the story, since the story revolves entirely around the AI's inability to tell what is right and what is wrong. The AI has to make mistakes for this to be believable.
I am not yet sure how to solve this. For now I intend to write the story first, see how big a problem this actually turns out to be, and then apply fixes later.