Friday, January 13, 2006

The Future of the Game Industry

People are wondering how the industry will sustain itself with the rising costs of next-gen development. The BIG G knows that most development costs aren't going to rise. I know what you're thinking: “Whaaaaaaaaaaaaaaaa?” I'll tell you why: because they don't have to. Rather than make games that utilize the awesome power of the next-gen platforms, just don't! “But The BIG G, WE won't buy them!” Maybe YOU won't, but people will.

All you need to sell a game is a good license. Attach your game to a popular TV show or movie, crap in a box, ship it, and you're guaranteed sales. More content? Who needs it! Better graphics? Just get something good enough that the screenshots look nice on the back of the box. There are good licensed games out there, but some don't sell because the license isn't popular enough, isn't gamey enough, or just isn't interesting to the mass-market consumer. There are also plenty of licensed games that are absolute crap but sell like hotcakes on the strength of the license alone.

Rather than spend the extra money, publishers should (and will) take the least amount of risk and make a bad licensed game for cheap. That way each game is less expensive to make, so if it happens to flop because no one cares about your license, it's not as much of a loss for the company. With the money you could have spent on quality, you can diversify your licensed-game roster to amortize the costs of the ones that flop. Also, the cost of a license can be comparable to what other companies spend on marketing, so a licensed game doesn't need nearly as big a marketing budget.

Licensed games are the future of the industry. That is, until the quality gets so low that people stop gaming altogether and another video game crash occurs, like back in the Atari days. Man, I want to play E.T. for the 2600 right now! (I can't believe I actually paid $4 for that game... What a rip off!)


Anonymous jubal_harshaw said...

Haven't you stolen that strategy lock, stock and barrel from Hollywood?

Besides, to me it looks like this has been happening for some time. I can't remember the last game I was excited about. Certainly none in the last 5 years.

I find almost all current games to be tedious, predictable and insufficiently complex. These sorts of games were fine when I was ten years old, but I'm an adult now. I shouldn't have to liven up my game experience by playing 'predict the plot.'

As far as consoles go, I am less concerned with graphical niceties and more concerned with raw multi-purpose processing power and the ability to manipulate huge datasets.

The sort of game I want to see (and that I've been working on for the past few years, off and on) is a full social simulation. Each NPC is a fully-fledged unique person with a full range of motivations, needs, memories and beliefs; the NPCs interact with each other, passing on information or misinformation. The NPCs, like in the real world, are the vector by which everything happens - they run businesses, go to work, steal, murder, lie, are influenced by things they see; they do pretty much everything that real people do. Your goal in the game is to rise to some sort of political power - through democratic means, revolution or business - by the simple process of manipulating people and events, maintaining or growing your influence. A first- or third-person game with a strong strategic element, but based on personal interaction with the NPCs. NO SCRIPTED EVENTS AT ALL. Every important event in the game must occur through natural NPC interaction and reaction.

Although it sounds similar to a lot of other games that promise a 'living world,' I have found those games to be an acute disappointment - not nearly as complex and manipulable as I would like. I want a cause-and-effect simulator that bears some resemblance to the real world, damn it.

But there's the crux. This game interests me because I have lived my life around three disciplines: technology, sociology and system analysis. Thus, I believe I can describe, to a reasonable degree, the simple equations of human social interaction (within certain homogenised societies like our own). It's really an academic work, designed to test out my theories.

Which is the first reason it will probably never see the light of day. The second is that the backend processing so far requires seven servers (each x86, 2GHz+).

This sort of game probably wouldn't appeal to anyone but me. Complex world, almost limitless interactivity and a simple premise that forces you into difficult personal decisions.

*Sigh.* Maybe in another five years it'll be finished. It's my very own Duke Nukem Forever. :)

7:10 AM  
Blogger The BIG G said...

The real problem with your ideal game is the massive amount of programming that would be required for the NPCs. I think you would be satisfied if you could have a game that featured one or two NPCs that were believable people. The problem is that there is no good system for handling that from a code standpoint, other than coding in pretty much everything you want the NPC to do: every emotion, every thought, and how they all interconnect. A university research team made a game sort of like this called Facade. It is an interesting concept, but when I played it at GDC two years ago, it had a long way to go. I haven't played the released version, but I have a feeling it's nowhere near where you or I would hope.

-The BIG G

12:23 PM  
Anonymous jubal_harshaw said...

The BIG G, that's a common misconception that I know to be untrue. Stop thinking of the problem as a programmer and think of it like a mathematician. Understand that people are pretty much identical and that their decisions are based on past experiences, which is the only thing that makes them different. There are very few experiences that are 'unique' - therefore each human is a recombination of existing experiences, each experience modified by those before.

The trick is NOT to program each NPC, but to create a whole range of behaviours that are called in response to certain stimuli; which behaviour gets called depends on a whole range of data that makes up the NPC's 'life.' From stimuli, it learns which reactions and actions lead to which outcomes. In a way, it's a form of AI, which is why I call the system 'AWARE' (Artificial World Analysis and Reaction Engine). Bad pun, I know, but I couldn't resist.
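In rough Python, the idea might look something like this. (Every name here is invented for illustration - this is a guess at the shape of the thing, not the actual AWARE code.)

```python
import random
from collections import defaultdict

class NPC:
    def __init__(self, name):
        self.name = name
        # (stimulus, behaviour) -> learned outcome value; this is the NPC's 'life'
        self.experience = defaultdict(float)

    def react(self, stimulus, behaviours):
        # Pick the behaviour with the best remembered outcome for this stimulus.
        # Unknown pairs default to 0, so a fresh NPC explores randomly among ties.
        return max(behaviours,
                   key=lambda b: (self.experience[(stimulus, b)], random.random()))

    def learn(self, stimulus, behaviour, outcome):
        # Blend the new outcome into memory: each experience modifies those before it.
        key = (stimulus, behaviour)
        self.experience[key] += 0.5 * (outcome - self.experience[key])

behaviours = ["flee", "investigate", "ignore"]
npc = NPC("baker")
npc.learn("loud_noise", "investigate", -1.0)  # investigating a noise went badly once
npc.learn("loud_noise", "flee", 0.5)          # fleeing worked out
print(npc.react("loud_noise", behaviours))    # -> flee
```

The point being: no behaviour is written for any particular NPC - the behaviours are shared, and only the experience data differs.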

This is the reason that it's so processor-intensive. Behaviours have to be (and are) emergent, not programmed. Though not a particularly good programmer (as with everything, I am self-taught), I appreciate an elegant, sustainable solution. I'm not sure why you feel such a thing is impossible; I can assure you it's not. There's no reason to program *everything* when you can simply lay down the rules for reasoning and interaction that must be followed, without the need to specify anything about the world, as long as each element of the world contains a lot of metadata for cross-referencing with NPC experiences.

I'm sure I'm not explaining this very well, but this is closer to 'proper' AI than game AI. To get the NPCs to act and behave normally requires a stimulus-response engine, not predictable patterns that can be re-used. I've never understood the convoluted nature of game AI, when this seems so much simpler to me.

The point is not to make the NPCs themselves all that believable, but to ensure that their interactions mirror those that would happen in the real world; their reasoning is simulated only to that end.

I still haven't solved the problem of how one would communicate with them - language parsing and interpretation is not my favourite area. Also, I have no real front-end at the moment, but I can at least check that the NPCs have been created, that they are unique, and that they can interact and, in doing so, modify their 'memory' - leading to different decisions next time. It's all so horribly slow, though.

Think of it more like 'LIFE' the next generation.

I suppose the reason I'm doing this is that I'm self-taught and never had anyone tell me it was impossible. Then again, I have a pretty good IQ (not that it means anything) and 25 years of quantifying people, their actions and their stated reasons for those actions. Perhaps I just have a different perspective.

I can't help but feel that this whole project sounds a bit megalomaniacal, and the way I see the NPCs as a means to an end is probably a tad psychopathic. Hmmm...

Will check out Facade in a bit, cheers.

5:07 AM  
Anonymous jubal_harshaw said...

Re-reading your last comment makes me think you might not quite understand what I'm trying to do.

You think of emotions as things to code, whereas I consider emotions to be a weighting system for making decisions. My way means much less code and is far more efficient. Everything is a means to an end for a decision, THEN an action. That's how I see people in real life, oddly enough.
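A toy sketch of "emotion as a weighting system", with made-up numbers and names - emotions aren't simulated at all, they just scale the utility of candidate actions:

```python
def decide(actions, emotions):
    """actions: {name: {emotion: base utility}}; emotions: {emotion: weight 0..1}."""
    def utility(action):
        # Each action's appeal is its utilities weighted by the current emotional state.
        return sum(emotions.get(e, 0.0) * u for e, u in actions[action].items())
    return max(actions, key=utility)

actions = {
    "fight": {"anger": 1.0, "fear": -0.8},
    "flee":  {"fear": 1.0, "anger": -0.2},
}
# The same code, fed different weights, reaches different decisions.
print(decide(actions, {"fear": 0.9, "anger": 0.1}))  # -> flee
print(decide(actions, {"fear": 0.1, "anger": 0.9}))  # -> fight
```

No "fear" object exists anywhere; there's just a number that tips decisions, which is all an outside observer could ever see anyway.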

Whether an NPC is believable depends on its actions, which are the result of its decisions. As long as the decision-action process is analogous to real life, or produces analogous results, that's believable. 'Thoughts' don't exist as far as the NPCs are concerned - an event happens that requires a reaction, and the reaction is based on past experiences.

How do you tell someone is scared? You cannot touch, taste or feel their emotion, only its physical manifestation. This is a common problem with people - insisting that an intangible really exists by looking at the resulting action. I cannot say for sure whether the person really feels scared, or whether the state of 'fear' exists at all, but I can see the reaction. That is all that is important: which other actions and decisions led to this point, not what goes on in someone's head (which, I have learned, is not usually much).

And, now, we get into complicated questions on the meaning of sentience, emotion and thought. I'll stop there. Suffice to say I see things... differently.

5:28 AM  
Blogger The BIG G said...

I think if you are going to make a system in which feelings and reasonable behaviors "emerge", then that system is even more complex than coding in all the behavior and emotions you want. If you're really interested in this, I'd recommend reading "Unified Theories of Cognition" by Allen Newell. The human mind is quite complex in how it evaluates and makes decisions, and programming a system that emulates it is correspondingly complex. There is a cognitive architecture called Soar, which is supposed to model human cognition. The only thing is, the last I heard, the learning portion of it (called "chunking") didn't really learn new things so much as store the results of previous decisions. Anyway, the real problem with the system you are proposing is that you have to give it a lifetime's worth of experiences, get it to form opinions, thoughts or feelings based on each event that happens, and store those in a meaningful way. That's a monolithic task - one that isn't feasible even for researchers with supercomputers.

Really, if we want to create a believable human-like AI using current technology, we'd have to create a model of the physical brain and then raise it like an infant. It would be a massive undertaking, but assuming we understood all the biology, we could model a brain with a massive program (which would still be complex, because there are different regions of the brain and all that). There isn't another algorithm that can learn and be trained well... Neural nets aren't much good here, since they rarely learn the thing you want them to. It's the same with most learning algorithms: they learn something, just not at all what you want them to. For example, if we had a simple system like the one you are proposing, it could learn things, but it wouldn't learn them the way people do, because there's nothing forcing it to. It has much less data, so the cognitive leap connecting cause and effect, reason and result, is going to be different.

Feel free to prove me wrong by writing such a system - I'd be very impressed. Even the academics dream of a system like the one you described, but they just can't program it to work. I don't understand how you can say you "KNOW this to be untrue" when there is not a single implemented system that supports your claim. If you know how it works, then PLEASE do all of us a favor and implement it, because I'd love to use it (regardless of performance problems); I just don't know how to code it.

-The BIG G

4:06 PM  
Anonymous jubal_harshaw said...

Well, I've returned. A year or so later, and I doubt you'll even see this message as it's far back in the archives, but...

From your response, you're still thinking of the complexity of an actual human being. A brain is very complex, but the actions of a person in a given situation are very much finite and predictable, given enough information about their previous experiences.

And, of course, in a game-world everything is that much more structured; things are designed specifically to be interactable and identifiable by means of a set of data parsable by any given NPC's emulated decision process.

Perhaps I have discovered the secret of human decision making by accident - but it's more likely that I'm still doing a very poor job of explaining the system. Firstly, in a finished game, the world would be seeded with a variety of 'pre-grown' AIs that already had the desired decision skills required (of course, their offspring would have to learn everything from scratch). Secondly, each agent does not have its own mind - just a pseudo-complex mathematical keying algorithm that links into 'the shared experience library' which is hosted by the world AI.

I'm not trying to make an amazing decision engine, just a set of rules and procedures by which an agent can make complex decisions in a rule-based world. The important part is that these are simple, rule-based worlds (game worlds), so the complexity you fear does not apply: most gameworlds are simple enough to allow only a few million different decisions, and everything is designed with the AI in mind. The main problem in all this is not the AI per se, but the design of the world itself.
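To make the 'shared experience library' idea concrete, here's a deliberately tiny sketch - the situations, traits and responses are all invented for illustration. Experiences live in one world-level store, and each agent holds only a key (its traits) that selects which entries apply to it:

```python
# The world AI hosts one experience store shared by every agent.
SHARED_LIBRARY = {
    # (situation tag, trait tag) -> learned response
    ("merchant_cheats_you", "trusting"): "complain",
    ("merchant_cheats_you", "cynical"):  "haggle_harder",
    ("fire_nearby", "trusting"):         "warn_others",
    ("fire_nearby", "cynical"):          "save_self",
}

class Agent:
    def __init__(self, traits):
        self.traits = traits  # the agent's 'key' into the shared store

    def respond(self, situation):
        # Try the agent's traits in order; fall back to a default behaviour.
        for trait in self.traits:
            if (situation, trait) in SHARED_LIBRARY:
                return SHARED_LIBRARY[(situation, trait)]
        return "ignore"

print(Agent(["cynical"]).respond("fire_nearby"))   # -> save_self
print(Agent(["trusting"]).respond("fire_nearby"))  # -> warn_others
```

Two agents sharing the same library still diverge, because their keys pick out different slices of it - no agent carries a mind of its own.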

I still suck at explaining it. I fear that most people see themselves and others as innately irrational, unknowable, infinitely complex. That may be true inside one's head, but if you just judge people by their actions, then there's very little variance - and what variance there is comes largely from the different outlets and opportunities afforded to people, not from their innate differences. A sociological perspective, rather than a psychological one, makes this task easier.

1:55 PM  
Blogger The BIG G said...

I get e-mail updates whenever I get a comment, so I can respond to this pretty quickly. The way you phrased your AI algorithm seems simple in theory, but when one goes to create it in code, it becomes quite difficult. For example, let's look at 'the shared experience library'. Now, I'm no noob when it comes to programming AI. I studied it in school, and I am a published author on the subject (under my real name, not my mischievous pseudonym). I don't even know how to begin creating a shared experience library. Firstly, you have to figure out how to encode what an experience is. That is a difficult problem in and of itself: AI works with numbers, not abstract concepts, and creating an AI system that can generalize behavior from specific situations is still a research project. Secondly, the set of rules and procedures has to be coded by the game programmers, unless you use learning algorithms - and those don't work out too hot in games (see Black & White). If you are coding a set of rules and procedures, then I don't see what the difference is between what you are proposing and what game AI programmers currently do. The problem is it takes a lot of rules and procedures to define reasonably human behaviors (even from a sociological perspective) since computers can only do EXACTLY what you tell them to, not what you want or intend them to do.

10:28 PM  
Anonymous jubal_harshaw said...

It's true that I've largely shifted these difficult issues to the gameworld, and yes, it's proving to be more difficult than I first thought. And, yes - you're right that the whole thing appears to hinge on what constitutes an 'experience.' But, the key thing here is when you said: "The problem is it takes a lot of rules and procedures to define reasonably human behaviors (even from a sociological perspective) since computers can only do EXACTLY what you tell them to, not what you want or intend them to do."

Hit the nail on the head again - which is why I started this thing in the first place. It's my goal to reduce this set of rules to a manageable number (say, 10) by creating a uniform way of organising the gameworld data, so that the same rule can trigger widely different responses based on context. It's a classification problem, if you will - or a relational database nightmare. You strip decision making down to its absolute core and shove the specifics into the database, keyed on the concept or item under consideration but kept in a uniform structure, so that the same solver can run against, say, a generic 'looking for..' ruleset. So, no - I suppose it's not that different from what some game AI looks like at the moment; it's a matter of making everything in the database a perfectly modular, uniform lego brick - and then it's about what you do with it.
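Something like this, in crude Python - the items and tags are invented, and a real version would be a database rather than a list, but it shows one generic rule serving any need because every item carries the same metadata shape:

```python
# Every world item is the same 'lego brick': a name, what needs it satisfies,
# and a cost. The specifics live in the data, not in the rules.
WORLD = [
    {"name": "bakery",  "provides": {"hunger"},                "cost": 3},
    {"name": "orchard", "provides": {"hunger"},                "cost": 1},
    {"name": "tavern",  "provides": {"thirst", "recreation"},  "cost": 2},
]

def looking_for(need, world):
    # One generic rule: pick the cheapest item whose metadata satisfies the need.
    candidates = [item for item in world if need in item["provides"]]
    return min(candidates, key=lambda item: item["cost"])["name"] if candidates else None

print(looking_for("hunger", WORLD))      # -> orchard
print(looking_for("recreation", WORLD))  # -> tavern
```

The same `looking_for` solver handles hunger, thirst or recreation without a line of need-specific code; adding a new need means adding data, not rules.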

I suppose that is, rather, the holy grail of some AI: how do you describe everything (objects and intangibles) in exactly the same manner, or create a small number of classifications under which everything can be shoved, so that things can be put together in some meaningful way - just like a language? It's fun experimenting with ways of doing this. The number of rules required depends on how successful this stupid idea is - on how easy it is to cross-reference aspects of a 'remembered' situation (a key that combines elements) with a current predicament, say, hunger.

6:44 AM  
Blogger The BIG G said...

Hopefully you can come up with a world representation that makes AI easier to write. Take, for example, the game of Go. There is a $1.6 million reward for a computer program that can defeat a strong amateur player at Go. Go is a much simpler world than the one humans interact in: it's a 19x19 grid, and you take turns placing colored stones. Yet although Deep Blue can beat the world chess champion, no program can beat a strong amateur Go player. The reason is that there are so many more options, so computers cannot currently brute-force it. (Eventually we'll have the processing power. Maybe we could do it now if someone were willing to spend a ton of money on a massively parallel supercomputer.) To win, a computer program would have to recognize patterns and choose the appropriate strategy based on its world view (aka the board). Even this simple-sounding task is too difficult for AI programmers, and Go is one of the simplest worlds in gaming. Yes, the problem of creating a believable human is a different one, but I'm merely trying to illustrate that breaking down a world into something you can apply simple rules to is the toughest thing in AI.
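Just to show how little the raw representation gives you, here's the entire Go "world" in a few lines of Python (ko and suicide rules ignored for brevity) and why brute force chokes - the branching factor is the number of empty points:

```python
EMPTY, BLACK, WHITE = 0, 1, 2
board = [[EMPTY] * 19 for _ in range(19)]  # the whole world: 361 cells, 3 states each

def legal_moves(board):
    # Ignoring ko and suicide: any empty point is a candidate move.
    return [(r, c) for r in range(19) for c in range(19) if board[r][c] == EMPTY]

print(len(legal_moves(board)))  # -> 361 options on move one, versus ~20 in chess
```

Everything a strong player "sees" - territory, influence, life and death - has to be conjured out of that flat array, and nothing in the representation hints at any of it.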

10:46 PM  
Anonymous jubal_harshaw said...

Dammit, dammit, dammit!
Now you've got me thinking about the Go problem (your link goes nowhere, btw). There are some similarities here, admittedly - mainly the task of making a system that can develop a strategy and adapt it to changing conditions. (Though I still maintain that creating a believable NPC involves fewer options, but more variation among the specific objects/actions/etc.)

A purely reactive AI would be simple, but the best it could do is foil your attempts to win.

I don't know much about Go (probably why I've not noticed this challenge) so I've been reading up on it a bit.
My thoughts are, so far:

The counters aren't as important as the zones. The over-arching strategy is one of zone management.

The game is a metaphor for troop movements and conquered land / enemies.

To that end, perhaps it would be an idea to look at similar human systems that have had to develop strategies and sets of rules to deal with similar situations - such as the military.

Some degree of brute force is almost inevitable if you want the system to be 'pick-up-and-play' rather than have it learn every strategy through play. Therefore, the problem lies in deciding when and how to attempt to predict the opponent's strategy.

A developed strategy, or pre-game decided adaptive strategy? Both? How do you figure out that your strategy is failing?

Cause and effect are wider concepts in this sort of game: how do you reconcile a move with its contribution to success or failure many moves hence?

I manage to sidestep a lot of these issues with my system: I get to create a gameworld with very linear relationships between very specific and different things, so that the content almost defines the logic, in a way.

I'm not going to be able to sleep tonight. I have an obsessive need to solve problems; I thought I'd kept that reasonably under control with my stupidly ambitious (and probably doomed) project. Now I've got to rework my ideas for a more nebulous application - which may be beneficial in the long term, if it doesn't drive me insane.

Thank you so very much, Big G.

7:07 AM  
Blogger The BIG G said...

Here's the link. Sorry about that. More later.


11:48 AM  
Blogger The BIG G said...

The variation in actions makes the problem more complex. Getting an NPC to behave reasonably in a vacuum is one thing, but when the player has the freedom to do whatever his/her heart desires, things get complicated. Say the player starts getting in the NPC's way. The NPC should get frustrated. Maybe the NPC should push the player out of the way. Maybe the player is trying to jump on the NPC's head. The NPC should react and duck. Maybe he/she should try to run away. Should he/she run towards a police officer or continue on his/her set path to the goal? Do the NPCs have desires for food and recreation? How do they balance them? What do they do for recreation? All of this has to be programmed in, and that in and of itself is a difficult problem. A great example is Fable. Peter Molyneux promised all these things, but the AI in Fable seemed just like any other game's, only possibly more annoying, because the NPCs called my character "Chicken Chaser" all the time. I think they labeled every character "Chicken Chaser". I don't even know what that means or how the AI engine got that impression. Probably a bug. It seems Lionhead is trying to do what you are describing, but the problems end up a lot harder than you might expect.

I've thought long and hard about Go AI, and I think the main problem is generalization. I think this extends to creating believable human characters too (maybe to a lesser extent). When we look at a Go board, we see patterns: good Go players can recognize territories and see where the areas of interest are. Unfortunately, a computer program only has a 19x19 grid with 3 states per cell: white, black or empty. There is no way that I know of to go from the array to a pattern. Sure, there are learning algorithms like neural networks, but in my experience they don't work all that well: they learn something, but there's no guarantee it will be the pattern you really want them to recognize. This is also an issue in computer vision. What seems so obvious to us is really just a bunch of 1s and 0s to a computer.

Sorry for driving you insane! That wasn't the intention... I'm just trying to provoke some thought. Maybe you'll be the first to come up with an innovative new game AI. I wish you the best with your efforts; it sounds like you are well on your way to making an impact on the AI community.


3:44 PM  
