Robix Croupier

dealers + robbie = robix

Team

  • Will Berriel
  • Andrew Klose

    Introduction

    We plan to implement a robotic Blackjack dealer using the Robix Rascal robotic arm system in combination with a digital web camera. Its movement will be predetermined for the initial dealing process, but it will then respond to user input to complete the game. The most challenging part will be recognizing which cards are dealt in order to know whether a user has won or lost. Before we get into that, though, we will focus on achieving the desired movement.

    Background

    Emulating a Blackjack dealer is an ideal task for a simple robot; in fact, a human Blackjack dealer is little more than a biological robot, following a strict set of clearly laid out rules. These rules can easily be programmed into a computer simulation, and this has been done hundreds of times for computer Blackjack games; since all that is needed is a set of if-else statements, it is almost trivial. The challenge behind our robotic dealer will not be the rules, but the physical aspects of the game: moving and recognizing the cards.

    Approach

    The project will be split into three parts:

    The first part will be to build an arm that suits our needs and to achieve the desired motions with it. The arm needs to be able to grasp a card, bring it in front of a camera to be identified, spin the card around to be viewed by a player, and move cards to desired positions. Because of this, most motion should be in the horizontal plane. We will write C functions to perform these basic motions, and then a simple proof of concept, perhaps picking up a lone card, spinning it around, and dropping it in front of a player.
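    A minimal sketch of what these motion functions might look like (every name here is a placeholder of ours, not an actual Robix library call):

        #include <iostream>

        // Placeholder motion primitives; the real bodies will call into the
        // Robix libraries. Stations are preset arm positions.
        enum Station { DECK, CAMERA, PLAYER };

        void move_to(Station s)   { std::cout << "move to station " << s << "\n"; }
        void grasp_card()         { std::cout << "close claw on card\n"; }
        void rotate_claw(int deg) { std::cout << "rotate claw " << deg << " deg\n"; }
        void release_card()       { std::cout << "open claw\n"; }

        // Proof of concept: pick up a lone card, spin it, drop it for the player.
        int main() {
            move_to(DECK);
            grasp_card();
            move_to(PLAYER);
            rotate_claw(180);  // flip the card to face the player
            release_card();
            return 0;
        }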

    The second phase of the project will involve determining a way to use the camera to identify which cards are being dealt, and getting the robotic arm to bring cards to the correct position in front of the camera. The camera, a D-Link webcam, can take moderate-resolution pictures, but we currently cannot find any libraries that would allow us to control it for direct capture. Thus, we might have to use some indirect method of reading files.

    The third phase of the project is to integrate the first two phases so that the robot will be able to deal a game of Blackjack. This will involve dealing a hand to both the player and the dealer; accepting the player's requests for cards (at the very least hits, perhaps including splits and double downs); determining whether the player has busted; if not, determining whether the dealer needs to hit and whether the dealer has busted (which will involve flipping the dealer's cards over on the table); and finally determining a winner. The process of dealing off of the deck will be easiest with a dealer's shoe. In that case the motions of the arm may be sufficient; if not, we might have to use a DC motor connected to one of the voltage-out ports on the Rascal (there are two general-purpose outputs) to spin a cylinder that contacts the card and pulls it off of the shoe so that the arm can grasp it.
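    The rules themselves really are just comparisons. A minimal sketch, assuming the standard house rule that the dealer hits on 16 or less and stands on 17:

        #include <vector>

        // Score a hand; card values are 1 (ace) through 10 (face cards count
        // as 10). An ace is promoted from 1 to 11 whenever that won't bust.
        int score(const std::vector<int>& hand) {
            int total = 0, aces = 0;
            for (int v : hand) { total += v; if (v == 1) ++aces; }
            while (aces-- > 0 && total + 10 <= 21) total += 10;
            return total;
        }

        bool busted(const std::vector<int>& hand)          { return score(hand) > 21; }
        bool dealer_must_hit(const std::vector<int>& hand) { return score(hand) < 17; }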

    While Rodney Brooks's subsumption model is very elegant, it won't fit our robot very well, as the behavior of our robot needs to be planned out well in advance, i.e., if this happens, do that.
    Gat's three-layer architecture is essentially how our robot works: there is the low-level work that the Robix software takes care of (PID control and such), the higher-level movement, which we wrote the code for, and finally the very high-level work, such as the recognition of cards using vision.
    Though it may not seem like it, our robot shares some motion characteristics with the gastrointestinal robot: two of our motors will move in conjunction in an accordion-like fashion.

    Progress

    Part 1: Motion

    We have assembled a preliminary design for our robotic arm. We can control it using a special graphical program included with the Robix package. Using the included C libraries, we will soon start writing code to interface with the system.

    The robot arm has been designed to have both lateral and vertical motion. The current incarnation is attached to a pedestal with a servo, giving it 180 degrees of rotation around the pedestal. In addition, a second motor is in the lateral configuration, giving the arm a 270-degree span that it can reach. Two motors are used to control the vertical height; when moved together, they allow the arm to raise and lower itself without rotating the grasping claw. Since the motors only provide rotational force, a single joint would provide the ability to raise and lower the claw, but the claw would rotate 90 degrees as it was raised and lowered. We wanted the claw to stay at the same relative angle, no matter the orientation of the arm, to allow us to easily flip cards over with it. The arm is terminated by a grasping claw that can rotate 180 degrees from one vertical orientation to the other, allowing it to hold cards horizontally. The claw can open and close.
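    The idea behind the paired vertical motors, in sketch form (the names and angle convention are ours, purely illustrative): as long as the two joint rotations cancel each other, the claw keeps its angle relative to the table.

        struct ArmPose {
            double shoulder;  // angle of the first vertical joint, in degrees
            double elbow;     // angle of the second vertical joint, in degrees
        };

        // Raise or lower the arm by delta degrees while keeping the claw level:
        // the elbow rotates opposite the shoulder, so the claw's net rotation
        // stays zero.
        ArmPose raise_keeping_claw_level(ArmPose p, double delta) {
            p.shoulder += delta;
            p.elbow    -= delta;
            return p;
        }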

    This configuration should allow for the arm to grasp a card, hold it in front of a camera to view it, flip it over for the player to see, and drop the card in front of the player.

    The Robix software includes C++ libraries to control the robot. It provides a header file and a readme that document all of the functions in the library, along with a sample Visual Studio project file. This project compiles under Visual Studio .NET, but the included instructions for starting a new project, written for an earlier version of Visual Studio, are out of date. In order to get a simple project to compile, we needed to perform a number of additional setup steps.

    Once these steps were completed, the project was finally able to read the Robix C++ libraries and compile. Interfacing with the arm was not especially hard once we determined that the command to execute a script didn't work as expected; instead, using macros in the script (which differ slightly from just running the script) seems to be the easiest way to control the robot. Our current plan is to write a series of macros for the robot's basic functions, opening and closing the claw for example, followed by wrapper functions for these macros in C++. We will use macros read from a file rather than accessing the hardware directly (which is possible) because the direct methods of controlling the robot are not as robust and easy to use as the macro functions, especially when trying to use multiple motors at once.
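    A sketch of the wrapper layer we have in mind (rbx_run_macro stands in for whatever the library's macro-execution call turns out to be, and the macro names are ours):

        // Stand-in declaration for the Robix call that runs a named macro from
        // the script file; assume it returns 0 on success.
        extern int rbx_run_macro(const char* name);

        bool open_claw()  { return rbx_run_macro("open_claw")  == 0; }
        bool close_claw() { return rbx_run_macro("close_claw") == 0; }
        bool show_card()  { return rbx_run_macro("show_card")  == 0; }  // hold card at camera
        bool deal_card()  { return rbx_run_macro("deal_card")  == 0; }  // drop card at player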

    Part 2: Vision

    The second part of the project consists of implementing a vision system for the robot. We will be using a webcam to generate still images of cards, which the robot will use to identify the cards that have been dealt. Currently, we do not foresee the need to use sensors to register any other game state (other than perhaps the player's bets). Like the Dervish robot described by Powers et al., we will begin with a map of the problem space and predetermined motions; but unlike Dervish, our physical problem space should not change dynamically, so we have no need to sense it beyond low-level servo position sensing.
    We have tried to use a TWAIN interface to our current webcam to generate the still images that we need, but our tests suggest that using such an interface might be impossible due to driver problems. We also tested with a Creative webcam, which was able to grab images but required human interaction in the form of multiple mouse clicks. We have the source code for the test programs, and examining it should show whether these user steps can be avoided.
    If the TWAIN interface proves to be nonviable, we can also try to use a framegrabber to pull individual frames out of the video stream that the camera sends to the computer. This method should work with the camera, but the source code currently does not compile on our test machine.
    A final possibility is to use some sort of webcam software to generate the images as files on the hard drive, which we can then examine.
    One key aspect we are relying on is that the servos consistently move to within a very small tolerance of where we expect them to be. If we are taking pictures of cards, we need to assume that the cards are in view in each picture we take. If the servos do not return to the same place, we will run into problems.

    The vision part of this project has proven to be more challenging than we first anticipated. At first we were using an older D-Link model webcam to acquire images, but after a week or so of work we ditched it, as support was sketchy at best and we could not get it to work reliably. After dumping the D-Link we moved on to a Veo Stingray webcam, which offered much better support. Additionally, the company offers an SDK that allows simple function calls to activate the camera, start it up, take snapshots or video, and shut the camera down. Further problems arose from Microsoft's Visual Studio .NET IDE: since most of the sample code we were able to find on the web was compiled with earlier versions of Visual Studio, it didn't want to work, or even compile, under the .NET version. Eventually we were able to get around these problems and get data from the webcam. We currently have the webcam take a snapshot using Veo's API; the snapshot is saved to the hard drive as a .bmp file, as we were unable to get the raw data straight from the camera. Capture will therefore work as follows: take a snapshot using the SDK, use an image class to read the data from the .bmp file on disk, and then do the analysis once we have read the image data.
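    In outline, the capture path looks like this (veo_snapshot is a placeholder for the actual SDK call, not its real name, and identify_card is the analysis we still have to write):

        #include <string>

        // Placeholder for the Veo SDK snapshot call, which saves a .bmp to disk.
        extern bool veo_snapshot(const std::string& bmp_path);

        struct Image { /* pixel data read back from the .bmp */ };
        extern Image load_bmp(const std::string& path);   // our image class
        extern int   identify_card(const Image& img);     // card value, or -1

        int capture_and_identify() {
            const std::string path = "card.bmp";
            if (!veo_snapshot(path))     // have the SDK write the snapshot to disk
                return -1;
            Image img = load_bmp(path);  // read the data back from the file
            return identify_card(img);   // analyze the pixels
        }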
    Due to the difficulties we have had getting our webcam to work, we weren't able to get any image analysis done. But since that is one of the only parts of our project left, along with coordinating movement, it should make for a good part 3 of the project: recognizing the cards and telling the arm where to move based on that.

    Writeup #3 4/14/03

    Vision: We currently have a 1D clustering algorithm implemented; it can look through a single line of an image (in our case a card) and find the centers of clusters of pixels with a low green component. It does sometimes fail to detect clusters, or detects ones that aren't there, but we feel that once it is turned into a 2D algorithm these anomalies should be taken care of (a sketch of the pass appears below).
    We discovered that if the program exits without shutting down our webcam, we need to restart the computer to get things running again; this is because the webcam holds a memory address, and if it is not shut down properly it will not give it up.
    We also ran into problems with the CImage iterator; because of these difficulties, we will be accessing pixels by their (x, y) coordinates rather than through an iterator.
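    A sketch of the 1D pass described above (the thresholds are illustrative; the real values come from experimenting with our images):

        #include <vector>

        // Scan one row of green-channel values and return the center x of each
        // run of "dark" pixels (green below the threshold). Runs shorter than
        // min_run are treated as noise and skipped.
        std::vector<int> cluster_centers_1d(const std::vector<int>& green_row,
                                            int green_threshold = 100,
                                            int min_run = 3) {
            std::vector<int> centers;
            int run_start = -1;
            for (int x = 0; x <= (int)green_row.size(); ++x) {
                bool dark = x < (int)green_row.size() &&
                            green_row[x] < green_threshold;
                if (dark && run_start < 0) {
                    run_start = x;                  // a run of dark pixels begins
                } else if (!dark && run_start >= 0) {
                    if (x - run_start >= min_run)   // ignore single-pixel noise
                        centers.push_back(run_start + (x - run_start) / 2);
                    run_start = -1;                 // the run ends
                }
            }
            return centers;
        }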

    Dealing cards: We now use this device (Image 1, Image 2) to deal cards to the Robix arm one card at a time. It works by using a rubber Lego wheel attached to a driven axle to push a single card through a slot that is just big enough for one card to pass through at a time. Cards are loaded through the top; the motor seen on top is used to add weight to the cards, increasing the friction between the wheel and the bottom card (the machine deals from the bottom of the deck). We are still trying to find an ideal method to drive the axle that powers the wheel.

    Game code: We have completed the back-end game code. Given that Blackjack is governed by a few standard rules, this was probably the simplest part of our project.

    Final 5/9/03

    Vision: The vision of the robot works very well given the right setup, namely plenty of light shining on the cards. Our final solution consists of a single Lego light mounted on the card dealer. This works sufficiently well in the majority of situations; however, if more Lego lights were available, or a flashlight bulb, the card recognition algorithm would work even better.
    A quick overview of how the recognition algorithm works: first it goes line by line and finds the centers of clusters of low intensity (more specifically, clusters with little blue or green). Once these clusters are known, a two-dimensional array of booleans of the same size is created, marking where the clusters are. We then search this array for two-dimensional clusters: starting at the top and going across line by line, if we hit a mark on a line we look at the line below, and if there is a mark within a horizontal threshold we look at the line below that one, and so on. If this can continue for a certain number of lines, we declare a cluster, and then clear all marks in a rectangle (50 lines tall, 30 wide) so that we don't count the same cluster multiple times. In this way we count the clusters (pips) on a card, which works for all number cards and for every ace except the ace of spades. To identify face cards, we look at the ratio of the total number of clusters found on the lines to the number of lines without clusters; for face cards (and high-value number cards) this ratio is relatively high. If the ratio is above a certain threshold, we count the number of non-white pixels in the center of the card: since face cards have relatively few white pixels there, this count is significantly higher than it would be for number cards, which makes them easy to tell apart. The ace of spades is picked out by the fact that the algorithm reports zero or one clusters for it; one cluster is just a normal ace, while zero is a case specific to the ace of spades.
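    A sketch of the vertical chaining test at the heart of the 2D step (the grid of marks comes from the 1D pass; the thresholds mirror the numbers above but are otherwise illustrative):

        #include <vector>

        using Grid = std::vector<std::vector<bool>>;  // marks[y][x] from the 1D pass

        // Starting from a mark at (x, y), walk downward line by line, accepting
        // a mark on the next line if it lies within h_thresh columns of the
        // current one. A chain that survives min_lines lines counts as one pip.
        bool chain_down(const Grid& marks, int y, int x,
                        int h_thresh = 5, int min_lines = 10) {
            int length = 1, cx = x;
            for (int row = y + 1;
                 row < (int)marks.size() && length < min_lines; ++row) {
                int found = -1;
                for (int dx = -h_thresh; dx <= h_thresh && found < 0; ++dx) {
                    int nx = cx + dx;
                    if (nx >= 0 && nx < (int)marks[row].size() && marks[row][nx])
                        found = nx;
                }
                if (found < 0) return false;  // the chain broke too early
                cx = found;
                ++length;
            }
            return length >= min_lines;
        }

    After a chain is declared a pip, the caller clears the 50-by-30 rectangle of marks around it, as described above, so the same pip is never counted twice.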

    Arm movement: Since the arm has no sensors, we had to construct a fixed environment for it to work in. In other words, if the position of the card dealer or of the platforms for the dealer's and player's hands changes, there are no more guarantees. The most important constraint is that the card dealer stay fixed, as movement around it needs to be the most precise: there is very little extra clearance for the grabber when it picks up a card.

    Tying it all together: This is where we ran into a bug that turned out to be a showstopper: we were unable to get our camera to take more than one picture per execution of the program. Because of this, we were never able to actually play a game of Blackjack against our robot. The most frustrating thing is that we are literally five minutes away from having a fully functional robot, as everything else works.

    Schedule

    Midpoint:

    Vision component: White threshold for cards
    Blackjack: Functions to play the game (turn given cards into numbers, determine wins and losses, etc.)
    Robotic Component: Setting up the cards (card shoe?) to be dealt, and to be shown in front of the camera in a way that most easily allows us to determine the identity of the card.

    Final:

    Vision: Fully identifying the cards and returning a standard representation.
    Blackjack: Interacting with the player (GUI?)
    Robotics: Play the game with a player.

    Media