A developer shares the thinking behind the game's design approach and an assessment of its feasibility

Original author: Alexander King; Chinese translation by ciel chen

The Chinese translation was compiled by Gamerboom; please credit the source when reposting, or contact Gamerboom on WeChat: zhengjintiao. The original English text follows.

Design Lessons from ALT.CTRL’s Emotional Fugitive Detector

by Alexander King on 03/17/17 10:47:00 am

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

Earlier this month, we showed an alternative controller game that I designed with Sam Von Ehren and Noca Wu at GDC’s ALT.CTRL exhibition. ALT.CTRL is a showcase of games with unique controllers. Throughout the exhibition, attendees were curious what our design and development process had been in making Emotional Fugitive Detector. I wanted to share some info on our process, as a sort of mini postmortem, in case it’s useful to any future designers of games using novel input methods. So here are the steps to designing a dystopic face-scanning game!

About the Finished Game

Emotional Fugitive Detector is a two-player cooperative game where players work together to outwit a malevolent face-tracking robot. The player being scanned tries to get their partner to pick out, from their face alone, the emotion they’re making. If they’re too expressive, though, the face-tracking API will detect them, and if they’re too subtle, their partner might pick the wrong emotion. It’s been called “unfriendly tech”, “surprisingly affecting”, and, uh… “the tech itself was a little janky… but the concept is so good”! It’s a game about emotional nuance, where a human face is both the controller and the screen. But it didn’t necessarily start off that way…

Step 1: Have A Weird Idea & Be Inspired

Sometime early last year, we had read about some open source face-tracking libraries somewhere or other. And, as game designers often do, we thought it might make for a pretty cool game. We originally wanted to make a face fighting game, where players would make different faces at each other to attack or block. But it never progressed past the preliminary idea phase until we attended last year’s ALT.CTRL show. Seeing so many amazing games that created unique in-person experiences inspired us to move beyond the idea phase and actually try to make something.

Step 2: See If Anyone Else Had That Idea Already

So we knew we wanted to make a game using face-tracking. The first thing we did, though, is something I often do when exploring a new mechanic: see if someone else has done it before! I think this is a neglected part of the process, because games as an art form have such a short memory of their own history. But if you take the time to do some research, you’re not just saving yourself the potential embarrassment of rehashing what you thought was a new idea; you’re also able to learn from the successes or failures of your predecessors. (Also, I just love games history.)

In our case we didn’t turn up much. While body tracking had been explored in many Kinect games, we found very few games using facial input (to wiser people, the dearth of examples might have been a red flag). The only ones we could find were various tech demos, or simple games where the face was just a substitution for a button. Games like Eye Jumper or Face Glider have players using their face to make inputs, but in a very direct manner, to steer or jump.

Seeing those other games helped clarify to us that we wanted to use the affordances of both face-tracking and the human face as integral parts of the design. Using facial movement as an input is something you could explore better in VR, for instance. So we wanted to detect expressions, and have that be the primary method of play. Our brains are wired to read faces, but it’s not a skill we’re often asked to use in games.

Step 3: Feasibility Test Your Tech

Now it was time to start putting our ideas into practice. Initially we were using OpenCV, by far the best-documented open source library for face-tracking. Using a variety of techniques, it makes it easy to detect facial points in photos or videos, similar to what you see in the face filters on Snapchat. It’s a great library and has an easy-to-implement plugin for Unity as well. However, it’s primarily a facial detection method only; it provides you with the points on a face and nothing more. While this is all you need if you’re superimposing cat ears onto someone (a noble goal), we wanted to detect expression changes like smiling or frowning. We tried building our own methods to determine these. While they sort of worked… it turns out facial gesture recognition is a non-trivial problem! But thankfully other people had trodden this path before us. We ended up finding a JavaScript library called clmtrackr, which extends a methodology similar to OpenCV’s but had already been trained against a library of faces, so it could output confidence scores for detecting four emotions (sad, happy, angry, and surprised).
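
To make that distinction concrete, here’s a minimal Python sketch of what this class of library hands you: bare facial points, with no notion of expression. This is illustrative only, not our actual code; it assumes the opencv-contrib-python package and a separately downloaded LBF landmark model file (lbfmodel.yaml).

```python
# Raw face detection: you get (x, y) points, nothing about "smiling".
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
landmarker = cv2.face.createFacemarkLBF()
landmarker.loadModel("lbfmodel.yaml")  # pretrained model, not bundled

frame = cv2.imread("player.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

if len(faces) > 0:
    ok, landmarks = landmarker.fit(gray, faces)
    # landmarks[0] is just an array of 68 (x, y) facial points
    # (brows, eyes, mouth outline). Deciding what expression those
    # points add up to is entirely your problem.
    print(landmarks[0].shape)  # (1, 68, 2)
```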

I want to emphasize that we are game designers, not researchers, computer scientists, visual recognition experts, or anything like that. Anyone with genuine interest in this area would likely be horrified at our shortcuts and hacks. Rather, we were consciously repurposing a tool to turn it into a game experience.

So it took some doing, but we had our initial technology up and running within a month or two. We could read four emotions from human faces and use those as inputs into a game system. While initially we weren’t sure the technology would even be feasible, it hadn’t taken long to get something working relatively well. Though ‘relatively’ is the operative word there.
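
As a sketch of what “four emotions as inputs” means in practice: once you pick a threshold, the per-frame scores reduce to a single discrete input. This is illustrative Python, not clmtrackr’s API; the scores dict and the 0.6 cutoff are stand-ins.

```python
# Reduce four per-emotion confidence scores to one game input.
EMOTIONS = ("sad", "happy", "angry", "surprised")

def read_expression(confidences, threshold=0.6):
    """Return the most confident emotion, or None if nothing clears the bar."""
    best = max(EMOTIONS, key=lambda e: confidences[e])
    return best if confidences[best] >= threshold else None

# e.g. one frame where the player is clearly smiling:
frame_scores = {"sad": 0.05, "happy": 0.82, "angry": 0.02, "surprised": 0.11}
assert read_expression(frame_scores) == "happy"
```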

Step 4: Playtest Forever

Sam, Noca and I met at the NYU Game Center, where the three of us are finishing our MFAs. A core principle in the Game Center’s approach to game design is playtesting. Playtesting all the time. Interactive systems are almost impossible to judge a priori, so something that seems great in your head can fall apart when someone who’s unfamiliar with it plays. The program hosts a weekly playtest night called Playtest Thursday where students, faculty and local developers test games and get feedback from the public (said public being predominantly undergrads there for the free pizza).

We went almost every week for several months, testing different gameplay mechanics. This was invaluable for learning the affordances of the technology, which was very persnickety. While it worked great in ideal conditions (i.e., when we tested it ourselves), testing with real people revealed several limitations. It was far from 100% accurate, extremely sensitive to how the subject was lit, and if someone moved their head even slightly, it was all over.

As our initial prototypes grew from feasibility tests into game prototypes, we also struggled to make a game that was in any way fun. These early prototypes centered on being read by the computer. It would tell you what it was looking for, and the first player to be successfully detected would win. We kept trying to use the design of the game to compensate for or hide the limitations of the technology. We’d test narrative framing to explain why the “AI” was so capricious, or use turn-based mechanics to hide the slow recognition speed. It’s very frustrating to test ideas and find they don’t work! But we kept at it, iterating constantly.
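
Roughly, the round logic of those early detection-race prototypes looked like this. This is a reconstruction, not our code: get_scores is a hypothetical stand-in for one tracker reading per player per poll, and the threshold and timeout values are invented for illustration.

```python
# One round of the early prototype: the machine announces an emotion,
# and the first player confidently detected making it wins.
import random
import time

EMOTIONS = ("sad", "happy", "angry", "surprised")

def play_round(get_scores, players, threshold=0.6, timeout=10.0):
    target = random.choice(EMOTIONS)
    print(f"Show me: {target}")
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for player in players:
            if get_scores(player).get(target, 0.0) >= threshold:
                return player  # first confident match wins
        time.sleep(0.1)  # poll the tracker periodically
    return None  # nobody detected in time
```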

Step 5: Keep Going Till You Find a Great Idea

The watershed moment for us was realizing the detection margin of error could be an asset to the design, rather than a liability. Matt Parker, a game designer and artist with experience in physical installation games, told us about a game he had worked on where players tried to avoid being detected by a Microsoft Kinect. Players had to contort their bodies into weird non-human shapes to win. The genius of that was immediately obvious: if players were trying to avoid being detected by our emotion scanning, rather than attempting to conform to the faulty algorithm, we could build a game around that. Turning a bug into a feature is a great way to stumble on great game design.

The rest fell into place very quickly. A charades-like format, with one player attempting to communicate with another, worked very well in testing. This, coupled with a hidden information mechanic, provided an excellent framework to the game.
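
Sketched the same way, the final rules invert that loop: the scanned player draws a secret emotion, has to stay under the scanner’s trip threshold for the whole round, and only wins if their partner names the secret correctly. The helper names here (scanner_scores, partner_guess) and the threshold are hypothetical.

```python
# One round of the charades-like final design: evade the scanner while
# still communicating the secret emotion to your partner.
import random

EMOTIONS = ("sad", "happy", "angry", "surprised")

def scan_round(scanner_scores, partner_guess, trip_threshold=0.5):
    # In the installation, only the scanned player is shown the secret.
    secret = random.choice(EMOTIONS)
    for scores in scanner_scores():  # one dict of emotion scores per frame
        if max(scores.values()) >= trip_threshold:
            return "caught"  # too expressive: the robot reads you
    # Round over without tripping the scanner; now the partner answers.
    return "escaped" if partner_guess() == secret else "missed"
```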

A well designed system is critical to a good game experience. Occasionally players at GDC would remark how interesting the underlying technology is, and would speculate about how fun it would be even without the surrounding game. I can tell you empirically that this is totally false!

Step 6: Bottom-up Design

The finished game seems like it was designed in a top-down manner: A dystopic future, robots versus emotions, a scanning aperture and so on. But in fact, all of the physical design was driven purely by the needs of the game design. Why would you try to convey emotions subtly? Because they’re illegal! How can we get players to stop moving their heads and screwing up the tracking? By having them stick their faces into a hole to constrain their movement! How can we ensure consistent lighting? By enclosing the camera in a box!

Every part of the physical and narrative design is serving the game, in a way that feels organic. That’s actually the part of the project I’m most proud of! Although the physical design of the box improved dramatically over different builds, the core essentials were present when it was just a discarded cardboard box!

Step 7: Polish & Keep Playtesting Forever

Everything subsequently was just polish. We put in audio, improved the box design, and experimented with different timing in the gameplay. We even recorded voiceover instructions with a proper voice actor (at a very friendly rate on account of being married to me). While this process lasted a long time, it was primarily incremental improvements to the core ideas. The game was essentially done after only a few months, and the remaining time was just improving implementation and ensuring a good player experience.

We also never stopped playtesting during this time, whether at Playtest Thursday or by taking it to local events like BQEs and Betas at the Brooklyn Brewery. Constant feedback from real people is critical at every step of a game’s development. Even after showing it at GDC, none of us thinks of the game as ‘done’, and we identified many improvements while exhibiting it there.

So I hope you find that useful! This is an overview, but I wanted to give an idea of how we went from a vague idea about face-tracking to a complete game experience. Designing for physical spaces using unproven technology is incredibly difficult, but also very rewarding. Some players might only play your game once in their lives, but the experience they have can be unique and interesting, something impossible to replicate with conventional controllers. So why not start designing your own! (source: gamasutra.com)