General Games
We show that Optimistic Hedge -- a common variant of multiplicative-weights-updates with recency bias -- attains $\mathrm{poly}(\log T)$ regret in multi-player general-sum games. In particular, when every player of the game uses Optimistic Hedge to iteratively update her action in response to the history of play so far, then after $T$ rounds of interaction, each player experiences total regret that is $\mathrm{poly}(\log T)$. Our bound improves, exponentially, the $O(T^{1/2})$ regret attainable by standard no-regret learners in games, the $O(T^{1/4})$ regret attainable by no-regret learners with recency bias (Syrgkanis et al., NeurIPS 2015), and the $O(T^{1/6})$ bound that was recently shown for Optimistic Hedge in the special case of two-player games (Chen & Peng, NeurIPS 2020). A direct corollary of our bound is that Optimistic Hedge converges to coarse correlated equilibrium in general games at a rate of $\tilde{O}(1/T)$.
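To make the update rule concrete, here is a minimal sketch of one Optimistic Hedge step, assuming the full-information setting where each player observes a loss vector over her actions every round. The function name and interface are illustrative, not from the paper; the recency bias is the standard one of counting the most recent loss vector twice, as a prediction of the next round's loss.

```python
import math

def optimistic_hedge_step(cum_loss, last_loss, eta):
    """One Optimistic Hedge update (illustrative sketch).

    cum_loss[i]  -- total loss of action i over rounds 1..t
    last_loss[i] -- loss of action i in round t, reused as a
                    prediction of the round-(t+1) loss (recency bias)
    eta          -- learning rate
    Returns the probability distribution to play in round t+1.
    """
    # Standard Hedge exponentiates -eta * cum_loss; the optimistic
    # variant adds the last loss once more before exponentiating.
    logits = [-eta * (c + l) for c, l in zip(cum_loss, last_loss)]
    m = max(logits)                        # shift for numerical stability
    w = [math.exp(x - m) for x in logits]
    s = sum(w)
    return [x / s for x in w]
```

With `eta = 0`, this reduces to the uniform distribution; setting `last_loss` to all zeros recovers vanilla Hedge.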
A new series of Quartermaster General games debuts in 2023 with the release of Quartermaster General: East Front. The fast-paced, card-driven game system by Ian Brody shifts its focus from the grand strategic and multiplayer approach of Quartermaster General WW2 Second Edition to two-player and operational-level games. The first title to release, Quartermaster General: East Front, depicts the deadly struggle between the Soviet Union and Germany.
Quartermaster General: East Front is the first completely new game in the system to be published by Ares Games, following the release of the new editions of WW2 Quartermaster General, its expansions Total War and Prelude, and Quartermaster General: 1914, originally published by Griggling Games. The new series will include two more games, depicting the conflict on the Mediterranean and the Western fronts during World War 2.
General game playing (GGP) is the design of artificial intelligence programs able to play more than one game successfully.[1][2][3] For many games, like chess, computers are programmed to play using a specially designed algorithm that cannot be transferred to another context. For instance, a chess-playing computer program cannot play checkers. General game playing is considered a necessary milestone on the way to artificial general intelligence.[4]
General video game playing (GVGP) is the concept of GGP adjusted to the purpose of playing video games. For video games, game rules have to be either learnt over multiple iterations by artificial players like TD-Gammon,[5] or predefined manually in a domain-specific language and sent in advance to artificial players,[6][7] as in traditional GGP. Starting in 2013, significant progress was made following the deep reinforcement learning approach, including the development of programs that can learn to play Atari 2600 games[8][5][9][10][11] as well as a program that can learn to play Nintendo Entertainment System games.[12][13][14]
The first commercial usage of general game playing technology was Zillions of Games in 1998. General game playing was also proposed, from 2003 on, for trading agents in supply chain management, including price negotiation in online auctions.[15][16][17][18]
In 1992, Barney Pell defined the concept of Meta-Game Playing and developed the "MetaGame" system. This was the first program to automatically generate game rules of chess-like games, and one of the earliest programs to use automated game generation. Pell then developed the system Metagamer,[19] which was able to play a number of chess-like games, given a definition of the game rules in a special language called Game Description Language (GDL), without any human interaction once the games were generated.[20]
General Game Playing is a project of the Stanford Logic Group of Stanford University, California, which aims to create a platform for general game playing. It is the best-known effort at standardizing GGP AI and is generally seen as the standard for GGP systems. The games are defined by sets of rules represented in the Game Description Language. In order to play the games, players interact with a game hosting server[25][26] that checks moves for legality and keeps players informed of state changes.
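The role of the hosting server can be sketched as follows. This is a hypothetical, simplified model, not the Stanford match protocol (which exchanges GDL messages over HTTP): the host holds the authoritative state, rejects illegal moves, and pushes each new state to every registered player. The class name and the `legal_moves`/`apply_move` hooks are illustrative assumptions.

```python
class GameHost:
    """Minimal sketch of a GGP-style match host (hypothetical API)."""

    def __init__(self, initial_state, legal_moves, apply_move):
        self.state = initial_state
        self.legal_moves = legal_moves    # state -> set of legal moves
        self.apply_move = apply_move      # (state, move) -> next state
        self.players = []                 # callbacks notified of new states

    def submit(self, move):
        # The host, not the player, is the arbiter of legality.
        if move not in self.legal_moves(self.state):
            raise ValueError("illegal move")
        self.state = self.apply_move(self.state, move)
        # Keep every player informed of the state change.
        for notify in self.players:
            notify(self.state)
```

A toy "counting" game shows the flow: `GameHost(0, lambda s: {1, 2}, lambda s, m: s + m)` accepts moves 1 and 2 and rejects anything else.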
Since 2005, there have been annual General Game Playing competitions at the AAAI Conference. The competition judges competing AIs' abilities to play a variety of different games by recording their performance on each individual game. In the first stage of the competition, entrants are judged on their ability to perform legal moves, gain the upper hand, and complete games faster. In the following runoff round, the AIs face off against each other in increasingly complex games. The AI that wins the most games at this stage wins the competition, and until 2013 its creator won a $10,000 prize.[19] So far, the following programs have been victorious:[27]
The General Video Game AI Competition (GVGAI) has been running since 2014. In this competition, two-dimensional video games similar to (and sometimes based on) 1980s-era arcade and console games are used instead of the board games used in the GGP competition. It has offered a way for researchers and practitioners to test and compare their best general video game playing algorithms. The competition has an associated software framework including a large number of games written in the Video Game Description Language (VGDL), not to be confused with GDL; VGDL is a coding language with simple semantics and commands that can easily be parsed. One example of a VGDL implementation is PyVGDL, developed in 2013.[6][24] The games used in GVGP are, for now, often two-dimensional arcade games, as they are the simplest and easiest to quantify.[41] To simplify the process of creating an AI that can interpret video games, games for this purpose are written in VGDL manually. VGDL can be used to describe a game specifically for procedural generation of levels, using Answer Set Programming (ASP) and an Evolutionary Algorithm (EA). GVGP can then be used to test the validity of procedural levels, as well as the difficulty or quality of levels based on how an agent performs.[42]
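The agent-based level check described above can be sketched as a simple win-rate estimate. This is a hypothetical harness, not part of the GVGAI framework: `play_episode` is an assumed hook that runs one playthrough of the generated level and reports whether the agent won, and the validity/difficulty thresholds are illustrative.

```python
import random

def evaluate_level(play_episode, episodes=20, seed=0):
    """Score a procedurally generated level by agent performance
    (illustrative sketch). `play_episode(rng)` is an assumed simulator
    hook: it runs one game on the level and returns True on a win."""
    rng = random.Random(seed)             # fixed seed for repeatability
    wins = sum(1 for _ in range(episodes) if play_episode(rng))
    win_rate = wins / episodes
    # A level the agent never completes is likely invalid or far too
    # hard; one it always wins is trivial. Win rate proxies difficulty.
    return {"win_rate": win_rate,
            "valid": win_rate > 0.0,
            "trivial": win_rate == 1.0}
```

In a real pipeline the EA would generate candidate levels, and scores like these would feed back into its fitness function.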
Since GGP AI must be designed to play multiple games, its design cannot rely on algorithms created specifically for certain games. Instead, the AI must be designed using algorithms whose methods can be applied to a wide range of games. The AI must also operate as an ongoing process that can adapt to the current state rather than relying on the output of previous states. For this reason, open-loop techniques are often most effective.[43]
A popular method for developing GGP AI is the Monte Carlo tree search (MCTS) algorithm.[44] Often used together with the UCT method (Upper Confidence Bound applied to Trees), variations of MCTS have been proposed to better play certain games, as well as to make MCTS compatible with video game playing.[45][46][47] Another tree-search variation used is Directed Breadth-first Search (DBS),[48] in which a child node of the current state is created for each available action, and each child is visited in order of highest average reward until either the game ends or time runs out.[49] In each tree-search method, the AI simulates potential actions and ranks each path based on its average highest reward, in terms of points earned.[44][49]
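The UCT selection rule at the heart of MCTS can be written in a few lines. The sketch below shows the standard UCB1-applied-to-trees score; the `Node` class and field names are illustrative, and `c` is the usual exploration constant (values around 1.4, i.e. roughly sqrt(2), are common).

```python
import math

class Node:
    """Minimal tree node for the sketch: visit count and summed reward."""
    def __init__(self):
        self.visits = 0
        self.total_reward = 0.0

def uct_score(parent, child, c=1.4):
    """UCT value of a child: average reward (exploitation) plus an
    exploration bonus that shrinks as the child is visited more."""
    if child.visits == 0:
        return float("inf")               # always try unvisited children first
    return (child.total_reward / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))
```

During the selection phase, MCTS repeatedly descends to the child with the highest `uct_score`, simulates a playout from the leaf, and backs the result up by incrementing `visits` and `total_reward` along the path.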
In order to interact with games, algorithms must operate under the assumption that games all share common characteristics. In the book Half-Real: Video Games Between Real Worlds and Fictional Worlds, Jesper Juul gives the following definition of games: Games are based on rules, they have variable outcomes, different outcomes give different values, player effort influences outcomes, the player is attached to the outcomes, and the game has negotiable consequences.[50] Using these assumptions, game playing AI can be created by quantifying the player input, the game outcomes, and how the various rules apply, and using algorithms to compute the most favorable path.[41]
Organizations are required, pursuant to N.D.A.C. 99-01.3-02-03(3), to have a policy manual on their conduct and play of games in the gaming area at a site, available for review by any person. The manual must include policies for resolving a question, dispute, or violation of the gaming law or rules. The manual cannot include internal controls.
The Panzer General games are turn-based games set in the Second World War. The player can control the armies of the Third Reich, the Allies, and the Soviet Union. New units can be bought, and damaged units repaired, with prestige points. All units except infantry must be supplied with fuel and ammunition.
Whether he was developing a civilization from hunter-gatherers to a medieval empire in Age of Empires or combating one-on-one in Dynasty Warriors, Steve Datz was influenced by video games from an early age.
Datz is seeking funding to help get the business off the ground and to support the chance to create a multimillion-dollar business in the area, he said. Internship or donation inquiries may be sent to info@flyinggeneralgames.com.
For six years, Datz worked in various jobs, including video editor and cook, before starting at UW-Milwaukee in information science technology in 2017. He then switched to computer science. But the focus of the program left no entry point into games after graduation, he said.
So, in 2019, he transferred to UW-Stout and the computer science game design concentration. The transfer office, along with Program Director Diane Christie, helped him receive credit for prior work to fast track him out of general education requirements and into game design courses.
General game players are systems able to play strategy games based solely on formal game descriptions supplied at "runtime". (In other words, they don't know the rules until the game starts.) Unlike specialized game players, such as Deep Blue, general game players cannot rely on algorithms designed in advance for specific games; they must discover such algorithms themselves. General game playing expertise depends on the intelligence of the game player rather than the intelligence of the game player's programmer.