Connect Four using Q Table Reinforcement Learning











I'm trying to make a Q-table based reinforcement learning algorithm play Connect Four against a neural-network Q table. It seems to work, but when I play the learned Q table against an opponent that picks moves at random, it loses almost every time.

I got the Connect Four code from this GitHub page and modified it into a class, adding a couple of extra functions, such as one that returns the list of valid moves.

Here is my Connect Four class.
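In case it helps, here is a minimal sketch of the interface the training code below relies on. The method names (validMoves, insert, boardTup, checkForWin, isDone) and the board attribute are taken from the calls in trainQ; the implementation shown is only an illustrative stand-in, not my actual class.

class Game:
    COLS, ROWS = 7, 6

    def __init__(self):
        # board[col][row]; row 0 is the bottom slot of each column.
        self.board = [['.'] * self.ROWS for _ in range(self.COLS)]

    def validMoves(self):
        # Columns that still have at least one empty slot.
        return [c for c in range(self.COLS) if '.' in self.board[c]]

    def insert(self, column, player):
        # Drop a piece into the lowest empty slot of the column.
        row = self.board[column].index('.')
        self.board[column][row] = player

    def boardTup(self, board, move):
        # Hashable (state, action) key used by the Q dictionaries.
        return (tuple(tuple(col) for col in board), move)

    def checkForWin(self):
        # Four in a row horizontally, vertically, or diagonally.
        b = self.board
        for c in range(self.COLS):
            for r in range(self.ROWS):
                p = b[c][r]
                if p == '.':
                    continue
                for dc, dr in ((1, 0), (0, 1), (1, 1), (1, -1)):
                    cells = [(c + i * dc, r + i * dr) for i in range(4)]
                    if all(0 <= cc < self.COLS and 0 <= rr < self.ROWS
                           and b[cc][rr] == p for cc, rr in cells):
                        return True
        return False

    def isDone(self):
        # Game over when someone has won or the board is full.
        return self.checkForWin() or not self.validMoves()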



And here is my Q-table code:



import numpy as np


def trainQ(nRepetitions, learningRate, epsilonDecayFactor):
    # Initialize Q tables, one per player.
    Qred = {}
    Qyellow = {}
    players = ['R', 'Y']
    steps = []
    epsilon = 1
    numTimesRWon = 0
    numTimesYWon = 0
    for step in range(nRepetitions):
        game = Game()
        sOld = None
        aOld = None
        stepCount = 0
        epsilon = epsilon * epsilonDecayFactor
        while not game.isDone():
            # Boolean index: 'Y' moves on even step counts, 'R' on odd ones.
            player = players[stepCount % 2 == 0]
            # Select next action.
            if player == 'R':
                moves = game.validMoves()
                # List of Q values for valid moves.
                moveQ = [Qred.get(game.boardTup(game.board, move), 0) for move in moves]
                ranNum = np.random.random()
                if epsilon > ranNum:
                    a = moves[np.random.choice(len(moveQ))]
                else:
                    a = moves[np.argmax(np.array(moveQ))]
                # If not first step, update Qold with TD error (1 + Qnew - Qold).
                if sOld is not None:
                    Qred[game.boardTup(sOld, aOld)] = (
                        Qred.get(game.boardTup(sOld, aOld), 0)
                        + learningRate * (1 + Qred.get(game.boardTup(game.board, a), 0)
                                          - Qred.get(game.boardTup(sOld, aOld), 0)))
                # Shift current board and action to old ones.
                sOld, aOld = game.board, a
                # Apply action to get new board.
                game.insert(a, player)
                stepCount += 1
            if player == 'Y':
                moves = game.validMoves()
                # List of Q values for valid moves.
                moveQ = [Qyellow.get(game.boardTup(game.board, move), 0) for move in moves]
                ranNum = np.random.random()
                if True:  # epsilon > ranNum -- epsilon check disabled, yellow always plays randomly
                    a = moves[np.random.choice(len(moveQ))]
                else:
                    a = moves[np.argmax(np.array(moveQ))]
                # If not first step, update Qold with TD error (1 + Qnew - Qold).
                if sOld is not None:
                    Qyellow[game.boardTup(sOld, aOld)] = (
                        Qyellow.get(game.boardTup(sOld, aOld), 0)
                        + learningRate * (1 + Qyellow.get(game.boardTup(game.board, a), 0)
                                          - Qyellow.get(game.boardTup(sOld, aOld), 0)))
                # Shift current board and action to old ones.
                sOld, aOld = game.board, a
                # Apply action to get new board.
                game.insert(a, player)
                stepCount += 1
            if game.checkForWin():
                # Update Qold with TD error (1 - Qold).
                if player == 'R':
                    numTimesRWon += 1
                    Qred[game.boardTup(sOld, aOld)] = 1 + Qred.get(game.boardTup(sOld, aOld), 0)
                if player == 'Y':
                    numTimesYWon += 1
                    Qyellow[game.boardTup(sOld, aOld)] = 1 + Qyellow.get(game.boardTup(sOld, aOld), 0)
        steps.append(stepCount)
    print("R won: ", numTimesRWon)
    print("Y won: ", numTimesYWon)
    return Qred, Qyellow, steps
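
For completeness, here is a rough sketch of how I call the training function and then evaluate the learned red table by playing it greedily against a yellow player that moves at random. The playVsRandom helper and the parameter values are illustrative stand-ins I'm adding for context, not part of my original code.

def playVsRandom(Qred, nGames=100):
    # Let red pick the argmax move from the learned table; yellow plays randomly.
    redWins = 0
    for _ in range(nGames):
        game = Game()
        stepCount = 0
        while not game.isDone():
            player = ['R', 'Y'][stepCount % 2 == 0]   # same turn order as trainQ
            moves = game.validMoves()
            if player == 'R':
                # Greedy move from the learned table; unseen states default to 0.
                moveQ = [Qred.get(game.boardTup(game.board, m), 0) for m in moves]
                a = moves[int(np.argmax(moveQ))]
            else:
                a = moves[np.random.choice(len(moves))]
            game.insert(a, player)
            if game.checkForWin() and player == 'R':
                redWins += 1
            stepCount += 1
    return redWins

Qred, Qyellow, steps = trainQ(nRepetitions=10000, learningRate=0.1, epsilonDecayFactor=0.999)
print("Red wins vs random:", playVsRandom(Qred), "/ 100")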









  • "but when I try to use the Q table against a opponent that randomly picks moves, it loses almost every time" — then it sounds like your code is not working correctly as intended, and thus is not ready for review. See the help center.
    – 200_success
    58 mins ago










  • Welcome to Code Review. I'm afraid this question does not match what this site is about. Code Review is about improving existing, working code. Code Review is not the site to ask for help in fixing or changing what your code does. Once the code does what you want, we would love to help you do the same thing in a cleaner way! Please see our help center for more information.
    – Calak
    38 mins ago















python reinventing-the-wheel connect-four






asked 1 hour ago by Cepheid




