Posts with tag "programming":

Evolving Game AI – Part 2 (Design and Genome)

posted on: Sunday, January 6, 2013 (12:58 pm) by Chase Stevens

This is part two of a three-part series. For the first installment, see Part One (Overview).

General Design

As mentioned in this series' previous post, the aim of evolving AIs for StratLoc was not to generate systems capable of manipulating individual units but, instead, to have them issue overall goals or directives for their turns in response to environmental data fed to them from the game. These directives, in turn, would be implemented by hardcoded finite state machines. An immediately obvious failure of this design was that AIs could easily issue directives that were flat-out impossible, such as attempting to attack an enemy player without possessing any units. There exist two naïve solutions to this issue, neither of which works particularly well. The first would be to ignore unattainable goals or unimplementable directives. This has the distinct disadvantage of allowing AIs to accomplish nothing during a turn, simply because they have not evolved to have goals compatible with the current state of the game. Moreover, because of the extremely large search space we were already tasking our genetic algorithm with exploring (thanks to the number of environmental variables we were feeding it, as will be discussed below), even AIs that had for the most part evolved excellent strategies could find themselves left dead in the water when presented with a novel situation. The other solution would be to simply feed more data to the AI, so that it could adapt to any situation and could evolve in a manner such that, if it did not have military units, it would not attempt to attack anyone. However, as mentioned, the search space we were already operating in was quite large: it would have taken several years to fully explore it with our available hardware, according to my calculations. Each additional bit of information exposed to the genetic algorithm would have increased the search space by a factor of two, as well as doubling the size of the genome (which I had already calculated as being at least a sizable 6.4 megabytes).

My solution to this was to imbue the AI's directives with decompositionality. This is to say, for any action the AI might attempt to make that had circumstances under which it would fail, the state machine charged with executing that action would, under those circumstances, defer to an alternate state machine in an attempt to remedy them. As an example, suppose that in some situation the AI attempted to build a tank but had insufficient resources to do so. The task of making a tank would then decompose into the task of seizing additional resources. Should this be impossible (due, perhaps, to a lack of workers with which to build mines), the task would decompose into building workers, and so on. Even when an entire chain of actions led to a dead end, my design attempted to let the AI do something by (in most circumstances) deferring to a "default action" specified by the genome. Only in extraordinary cases would the end result of an AI directive lead to nothing being done.
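
To make the idea concrete, here is a minimal Python sketch of the kind of fallback chain described above. The helper names (can_afford_tank, build_mine, perform_default_action, and so on) are hypothetical stand-ins, not the actual StratLoc state machines:

def try_build_tank(ai):
    # Happy path: we can afford the tank, so just build it.
    if ai.can_afford_tank():
        ai.build_tank()
    elif ai.has_idle_worker():
        # Can't afford it: decompose into seizing additional resources.
        try_build_mine(ai)
    else:
        # No workers to build mines with: decompose further.
        try_build_worker(ai)

def try_build_mine(ai):
    if ai.has_idle_worker() and ai.has_reachable_metal_tile():
        ai.build_mine()
    else:
        try_build_worker(ai)

def try_build_worker(ai):
    if ai.can_afford_worker():
        ai.build_worker()
    else:
        # Dead end: fall back to the genome-specified "default action".
        ai.perform_default_action()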

Genome layout

This section will give a general layout of the genome. If you are interested in specifics, I recommend taking a look at the official structure specification I wrote up before StratLoc was implemented. Overall, the genome can be broken into two sections. The first contains a series of "threshold" values and takes up 26 bits in total. The threshold values encode variables referenced by the action-implementing finite state machines, such as the "combat win chance tolerance", variables used in the generation of environmental inputs, such as the "[unit] queue length tolerance", and the aforementioned "default" AI action. The second can be imagined as a 2D array, indexed first by a 21-bit value representing environmental conditions, with each entry holding five 5-bit AI action encodings, all of which are intended to be executed during a given turn. The 5-bit encoding allows for 32 possible actions. While most actions are straightforward orders to create units, attack enemy players, build structures, or fortify locations, seven actions are reserved for manipulating the internal state of the AI. This allows it to alter its combat win chance tolerance (which will result in different combat behavior), alter its turns-left threshold (which is reflected in the environmental conditions), or clear the AI's unit queue. Specifics as to the execution of AI actions can be found here, but be forewarned: these underwent some changes in the final implementation. Environmental conditions encode what occurred last round (such as being attacked or building cities), the ranking of the AI (in several regards), which military unit type was most numerous, and whether any of the AI's thresholds had been triggered. While it might seem like these sparse details would provide insufficient information to the AI, any increase in the size of the environmental conditions would have (quite literally) exponentially increased the difficulty of the genetic algorithm's task. Overall, however, the AIs seemed to do quite well with the details they were provided and, regardless, the intelligent decomposition of AI actions ensured that even totally randomly generated AIs would have been moderately successful at playing StratLoc.
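
As a rough illustration of this layout (not the actual StratLoc code), the genome can be pictured as one long bitstring: a 26-bit threshold block followed by 2^21 rows of five 5-bit actions each. A per-turn lookup might then work something like the sketch below, assuming the genome is stored as a string of '0' and '1' characters:

THRESHOLD_BITS = 26
ACTIONS_PER_TURN = 5
ACTION_BITS = 5

def read_bits(genome, start, length):
    # Interpret genome[start:start+length] (characters '0'/'1') as an integer.
    return int(genome[start:start + length], 2)

def decode_thresholds(genome):
    # The first 26 bits hold the threshold values and the default action.
    return read_bits(genome, 0, THRESHOLD_BITS)

def actions_for(genome, env_code):
    # env_code is the 21-bit encoding of the current environmental conditions;
    # each row of the table holds the five 5-bit actions to issue this turn.
    row_bits = ACTIONS_PER_TURN * ACTION_BITS
    offset = THRESHOLD_BITS + env_code * row_bits
    return [read_bits(genome, offset + i * ACTION_BITS, ACTION_BITS)
            for i in range(ACTIONS_PER_TURN)]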

In my next post, I'll go over the implementation details of the genetic algorithm itself and detail some of the problems we ran into with the AI.

Tags: evolution, game design, genetic algorithms, programming, stratloc, video games

Evolving Game AI - Part 1 (Overview)

posted on: Friday, December 28, 2012 (3:05 am) by Chase Stevens

In this series of posts, I’ll be discussing my experience designing and helping to implement AI opponents for the turn-based, open-source strategy game StratLoc. The AIs were created using a genetic algorithm which was used to crudely simulate natural selection and evolution, allowing our AIs (over the generations) to become competent players of the game. Although the use of genetic algorithms in video games is not uncommon, the technique is often applied when something needs fine-tuning, such as the numeric value of some "aggressiveness" variable, or for the optimization of well-defined subtasks with limited scope, such as situational unit movement patterns or the micromanagement of a base. Our AI, on the other hand, was tasked with providing turn-by-turn, high-level strategies and actions in response to the overall climate of the game, the execution of which was then handled by a hardcoded system.

A game of StratLoc
StratLoc

This first introductory post will give an overview of StratLoc as a game, so as to provide some context. StratLoc is a turn-based strategy game played on a hexagonal board, a la Civilization (sans a great deal of complexity). The objective of the game is to eliminate your opponents by capturing their cities. The game is played on a randomly generated map featuring plains, hills, mountain and water tiles (both of which are impassable), metal tiles, and four starting cities (one for each player). From cities, players can build military units and workers; players must also control at least one city to avoid losing the game. Cities are permanent fixtures which can be captured by opposing players.

The game has a single overt resource, "metal", which can be harvested from metal tiles by building mines. Mines provide a steady stream of metal per turn, inversely proportional to the distance from the mine to its owner's nearest city. Mines, once built, cannot be destroyed, but can switch ownership if captured by another player. Players must also manage "power", which acts as a supply cost for units. Power is provided in small quantities by cities and in bulk by power stations. Power stations are not capturable, but may be destroyed by enemies. A final building, the fortification, can be built in order to provide a defensive bonus and gain vision around a tile. Fortifications are permanent but capturable. All buildings (including cities) must be built using a "worker" unit, which is destroyed in the process.
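
To pin down "inversely proportional", here is one plausible reading in Python; the base rate of 10 is purely illustrative, since the actual tuning isn't given here:

BASE_MINE_INCOME = 10.0

def metal_per_turn(distance_to_nearest_owned_city):
    # Income falls off with distance; the max() guards against a zero distance.
    return BASE_MINE_INCOME / max(1, distance_to_nearest_owned_city)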

There are three military units in the game: anti-air, tanks, and air. Anti-air units have a standard range and low base damage, but are cheap and gain a combat advantage against air units. Tanks are slightly more expensive, with standard base damage but high range. Tanks also suffer a penalty against air. Air units have a high metal cost and high power usage, along with low HP. However, they make up for this by providing a huge line of sight and high mobility. In this way, combat strategy is more or less a game of rock-paper-scissors, with air countering tanks, tanks countering anti-air, and anti-air (obviously) countering air. Units can gain a defensive bonus by being located on a city, fortification, or hill, or by accumulating a fortification bonus from remaining in the same spot for several turns.
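
The counter relationships could be encoded as simply as the table below; only the direction of the advantages comes from the rules above, while the 1.5 and 0.5 multipliers are invented for illustration:

# Illustrative damage multipliers; only the counter directions are from the rules.
DAMAGE_MULTIPLIER = {
    ('air', 'tank'): 1.5,        # air counters tanks
    ('tank', 'anti-air'): 1.5,   # tanks counter anti-air
    ('anti-air', 'air'): 1.5,    # anti-air counters air
    ('tank', 'air'): 0.5,        # tanks suffer a penalty against air
}

def damage_multiplier(attacker, defender):
    return DAMAGE_MULTIPLIER.get((attacker, defender), 1.0)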

That concludes the general overview of the game and its rules. In my next post, I’ll discuss the design of the AI from a high-level perspective, then explore the details of the AI genome.

Tags: evolution, game design, genetic algorithms, programming, stratloc, video games

Reconstructing Images Using Damaged Copies (in Python)

posted on: Thursday, September 13, 2012 (4:01 pm) by Chase Stevens

I've recently been working on implementing various methods of image reconstruction in Python. The idea is to, given several imperfect (that is to say, noisy, incomplete, photoshopped, or otherwise damaged) copies of some original image, attempt to arrive at something close to (if not precisely) the original image by combining said copies. Through these means, an approximation of the original image can be generated should the original be lost, damaged, irretrievable, or otherwise unavailable or useless. In writing the various functions for doing this, I implemented techniques used in signal averaging and variants thereof. I also implemented a "modal image" function which, for each pixel, uses the "modal" RGB values across all image copies or, failing that, performs a simple mean of values.

Examples and Analysis

Original Christian Bale photograph

For the following examples, I modified the above image of actor Christian Bale. Ironically enough, in testing for this article, I overwrote the original image and had to use Google's reverse image search to find it again.

Damaged images
Damaged Christian Bale #1 (Delta: 105.122025)
Damaged Christian Bale #2 (Delta: 88.026425)
Damaged Christian Bale #3 (Delta: 68.5942)
Function results (listed with difference from original image as given by get_delta, lower is better):
average_images_add: 155.35693875
average_images_sub: 72.4844575
average_images: 43.92254625
average_noisefilter_all_sub: 51.1805645833
average_noisefilter_all_delta: 36.9071316667
modal_image: 42.53322
Christian Bale reconstructed using modal_image (Delta: 42.53322)
Christian Bale reconstructed using average_noisefilter_all_delta (Delta: 36.9071316667)
As is readily visible, in this example the naïve "voting"-type approach used by modal_image is deceived in any case where forms of damage across multiple images "agree" with each other. Given a larger number of copies or a less artificial form of damage, this would likely cease to be an issue; in theory, modal_image could even go so far as to reconstruct the original image perfectly. average_noisefilter_all_delta produced the best results on average, although, to its detriment, its output depends on the order of the list of image copies passed to it. In addition, while it manages to come closer to the original image, the subtraction of image differences it employs creates a slightly "jarring" visual effect (as seen above).

The inherent issue in reconstructing damaged images is one of knowledge. To humans, it seems obvious that the original image of Christian Bale didn't have streaks of white across it. However, this is a mere assumption based on our vast pool of experience in viewing photographs. Who's to say that the original photo didn't have white scribbles on it? The computer is hampered, from our perspective, by its inability to make these assumptions when reconstructing the original image, so although it produces results better than a mere superposition of the copies, they are rarely better than what could be accomplished by human hands.

Noisy images
Noisy Christian Bale #1 (Delta: 92.715475)
Noisy Christian Bale #2 (Delta: 93.352175)
Noisy Christian Bale #3 (Delta: 93.08885)
Function results (listed with difference from original image as given by get_delta, lower is better):
average_images_add: 103.01672125
average_images_sub: 81.929801875
average_images: 82.971985
average_noisefilter_all_sub: 69.6356495833
average_noisefilter_all_delta: 75.23682
modal_image: 65.2880416667
Christian Bale de-noised using modal_image (Delta: 65.2880416667)
Christian Bale de-noised using average_noisefilter_all_sub (Delta: 69.6356495833)
In this case, modal_image produced the best results, although its output is still quite noisy. Again, having more copies would significantly improve the end product. The output also appears to be slightly brighter than would be expected. average_noisefilter_all_sub comes in at a close second, although (to the human eye) its results appear to be quite a bit noisier than modal_image.

I tried using these image reconstruction methods on compressed images with JPEG artifacts as well, although the results were much poorer. Output deltas ended up being worse than the least compressed copy of a given set, although better than an average copy. modal_image and average_images_add seemed to perform best. The overall problem was again one of knowledge: if less compressed images could be prioritized or weighted according to their closeness to the source, the results would likely be much better. However, the computer has no means by which to determine which copy is closest to the original, and thus fails to leverage this. Some kind of programmatic detection of JPEG artifacts could be of great help in improving this situation.
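
For what it's worth, here is a rough sketch of that weighting idea, built on the Image class from the code below. The weights would have to come from some per-copy quality estimate (say, a hypothetical JPEG-artifact detector), which is exactly the part that's missing:

def weighted_average_image(image_list, weights):
    # Per-pixel weighted mean, favouring copies with higher weights.
    total = float(sum(weights))
    output = Image(dimensions=image_list[0].size)
    for x, y in output.getpixellist():
        r = g = b = 0.0
        for image, w in zip(image_list, weights):
            pr, pg, pb = image.imgData[x, y]
            r += pr * w
            g += pg * w
            b += pb * w
        output.setpixel(x, y, int(r / total), int(g / total), int(b / total))
    return output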

As a final note before my code, I just recently launched Broverbs. Please do check it out if you've got the time!

Download
from __future__ import division
from PIL import Image as PILImage
from Tkinter import *
from tkFileDialog import askopenfilename as openfile
from itertools import product
from random import shuffle

class Image():
    """Thin wrapper around a PIL image exposing mutable per-pixel RGB data."""
    def __init__(self,filename=None,dimensions=None):
        if filename!=None:
            self.pic = PILImage.open(filename)
            self.imgData = self.pic.load()
            self.size = self.pic.size
        else:
            if dimensions!=None:
                self.size = dimensions
            else:
                self.size = [0,0]
            self.pic = None
            self.imgData = {}
            for x,y in product(range(self.size[0]),range(self.size[1])):
                self.setpixel(x,y,0,0,0)
    def setpixel(self,x,y,r,g,b):
        self.imgData[x,y] = tuple(map(max,zip((r,g,b),(0,0,0))))
    def getpixellist(self):
        return product(range(self.size[0]),range(self.size[1]))
    def show(self):
        tempImage = PILImage.new('RGB',self.size)
        for x,y in self.getpixellist():
            tempImage.putpixel((x,y),self.imgData[x,y])
        tempImage.show()

def get_delta(image1,image2):
    if image1.size != image2.size: raise ValueError("Image sizes do not match")
    delta = 0
    for x,y in image1.getpixellist():
        delta += sum(abs_pixel_diff(image1,image2,x,y))
    return delta / (image1.size[0] * image1.size[1])

# Most common value in l, falling back to the mean when all values are unique.
mode = (lambda l: max([(l.count(i), i) for i in set(l)])[1] if len(set(l)) < len(l) else sum(l)/float(len(l)))

# Lift a per-channel function into a pixelwise operation over two images.
pixel_operation = lambda f: lambda image1,image2,x,y: map(f,zip(image1.imgData[x,y],image2.imgData[x,y]))
abs_pixel_diff = pixel_operation(lambda (x,y): abs(x-y))
pixel_diff = pixel_operation(lambda (x,y): x-y)
pixel_avg = pixel_operation(lambda (x,y): (x+y)/2)
pixel_sum = pixel_operation(lambda (x,y): x+y)

def image_operation(operation,image1,image2):
    if image1.size != image2.size: raise ValueError("Image sizes do not match")
    result = Image(dimensions=image1.size)
    for x,y in result.getpixellist():
        r,g,b = operation(image1,image2,x,y)
        result.setpixel(x,y,r,g,b)
    return result

get_delta_image = lambda image1,image2: image_operation(abs_pixel_diff,image1,image2)
subtract_image = lambda minuend,subtrahend: image_operation(pixel_diff,minuend,subtrahend)
average_image = lambda image1,image2: image_operation(pixel_avg,image1,image2)
add_image = lambda image1,image2: image_operation(pixel_sum,image1,image2)

def get_average_image(image_list):
    # Per-pixel arithmetic mean of every image in image_list.
    output = Image(dimensions=image_list[0].size)
    for x,y in output.getpixellist():
        r,g,b = reduce(lambda a,b: (a[0]+b[0],a[1]+b[1],a[2]+b[2]),[image.imgData[x,y] for image in image_list])
        output.setpixel(x,y,r/len(image_list),g/len(image_list),b/len(image_list))
    return output

def generalized_average(image_list, combination_function, function1, function2):
    # Split the (shuffled) copies into two halves, average each half, then
    # combine the two half-averages using the supplied functions.
    shuffle(image_list)
    set1 = image_list[0::2]
    image1 = get_average_image(set1)
    set2 = image_list[1::2]
    image2 = get_average_image(set2)
    return combination_function(function1(image1,image2),function2(image1,image2))

def average_images_add(image_list): 
    return generalized_average(image_list,add_image,average_image,subtract_image)

def average_images_sub(image_list):
    return generalized_average(image_list,subtract_image,average_image,subtract_image)

def average_images(image_list): 
    return generalized_average(image_list,subtract_image,average_image,get_delta_image)

def generalized_noisefilter(image_list,function):
    # Estimate the noise as the difference between two half-averages, then
    # subtract that estimate from every copy before averaging.
    shuffle(image_list)
    set1 = image_list[0::2]
    image1 = get_average_image(set1)
    set2 = image_list[1::2]
    image2 = get_average_image(set2)
    noise = function(image1,image2)
    denoise_set = [subtract_image(image,noise) for image in image_list]
    return get_average_image(denoise_set)

def average_noisefilter_all_sub(image_list):
    return generalized_noisefilter(image_list,subtract_image)

def average_noisefilter_all_delta(image_list):
    return generalized_noisefilter(image_list,get_delta_image)

def modal_image(image_list):
    shuffle(image_list)
    result = Image(dimensions=image_list[0].size)
    for x,y in result.getpixellist():
        r = mode([image.imgData[x,y][0] for image in image_list])
        g = mode([image.imgData[x,y][1] for image in image_list])
        b = mode([image.imgData[x,y][2] for image in image_list])
        result.setpixel(x,y,r,g,b)
    return result

def test_func(f,images,original,trials=25):
    results = list()
    for x in range(trials):
        print "Trial #"+str(x+1)
        results.append(get_delta(f(images),original))
    return sum(results)/len(results)

function_list = [average_images_add,average_images_sub,average_images,average_noisefilter_all_sub,average_noisefilter_all_delta,modal_image]

def test_functions(image_list,original,functions=function_list,trials=10):
    out = ''
    for f in functions:
        out += (str(f) + ' ')
        out += (': ')
        out += (str(test_func(f,image_list,original,trials)))
        out += ('\n')
    print out
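
For reference, a typical run looks something like this (the filenames are placeholders for wherever you've saved the original and its damaged copies):

# Load the original and its damaged copies, compare every reconstruction
# method against the original, and display one of the reconstructions.
original = Image('bale_original.png')
copies = [Image('bale_damaged_1.png'),
          Image('bale_damaged_2.png'),
          Image('bale_damaged_3.png')]
test_functions(copies, original, trials=10)
modal_image(copies).show()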

Tags: code, image processing, information theory, programming, python, voting

More Stupid Higher-Order Function Tricks

posted on: Thursday, March 15, 2012 (11:23 am) by Chase Stevens

A while ago, a friend of mine who was unfamiliar with python asked me how to go about parsing a CSV into a list of lists. Moreover, the data included integers which needed to be parsed as such (and not as strings). My solution was as follows:

(lambda f: [map((lambda x: x.isdigit() and int(x) or x),(line.split(','))) for line in f.readlines()]) #takes a file
Just yesterday, another friend of mine had accidentally merged and subsequently sorted two Excel spreadsheets, overwriting one of the originals with the result. She asked if I could take the merged file and the remaining original file and reconstruct the file she had overwritten. Incorporating the CSV parsing function I had written earlier, I wrote this:
from tkFileDialog import asksaveasfile, askopenfile
full = (lambda f: [map((lambda x: x.isdigit() and int(x) or x),(line.split(','))) for line in f.readlines()])(askopenfile())
exclude = (lambda f: [map((lambda x: x.isdigit() and int(x) or x),(line.split(','))) for line in f.readlines()])(askopenfile())
(lambda f: map(f.write,(map((lambda row: ','.join(map(str,row))),filter((lambda x: x not in exclude),full)))))(asksaveasfile()) #str() needed since parsed rows contain ints

Tags: code, csv, filter, lambda, map, programming, python, tkinter

Plotting Simple Functions in 3 Lines with Python

posted on: Tuesday, March 6, 2012 (11:13 am) by Chase Stevens

You can hardly even tell that I took a course on functional programming last semester.

import turtle
eq = "(x ** 2) / 1.5"
map((lambda x: turtle.setx(x[0]) or turtle.sety(x[1])),[(n,(eval(eq.replace("x","(" + str(n) + ")")))) for n in range(0,100)])

Tags: code, lambda, math, programming, python