Knight Probability In Chessboard



  • 0

    Why is the final answer sum(map(sum, dp))? Shouldn't it be dp[r][c]?


  • 0

    @yunli We want the probability that the knight remained on the board after K moves. The probability that it remained on the board and is now on square (r, c) is dp[r][c], so we want the sum of dp[r][c] over all squares.
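
    To make that concrete, here is a minimal sketch of one way to set up such a dp (not necessarily the article's exact code), where dp[i][j] is the probability of having survived every move so far and currently standing on (i, j); the answer is then exactly sum(map(sum, dp)):

    def knightProbability(self, N, K, r, c):
        moves = ((2,1),(2,-1),(-2,1),(-2,-1),(1,2),(1,-2),(-1,2),(-1,-2))
        # dp[i][j] = probability the knight is still on the board AND on (i, j)
        dp = [[0.0] * N for _ in range(N)]
        dp[r][c] = 1.0
        for _ in range(K):
            nxt = [[0.0] * N for _ in range(N)]
            for i in range(N):
                for j in range(N):
                    if dp[i][j]:
                        for di, dj in moves:
                            ni, nj = i + di, j + dj
                            if 0 <= ni < N and 0 <= nj < N:
                                nxt[ni][nj] += dp[i][j] / 8.0
            dp = nxt
        # summing over all squares gives the probability of still being on the board
        return sum(map(sum, dp))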


  • 0

    Nice explanation of matrix exponentiation, though I am a little confused by the canonical part. I implemented my own matrix exponentiation but got TLE; I guess that's because I did not optimize it by exploiting the symmetry.


  • 0

    Maybe instead of storing the full NxN board, the solution should only store a map of positions:

    map< pair< int, int >, double > positions;
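
    A rough sketch of that idea in Python (a dict keyed by position rather than the C++ map above); whether it beats the dense N x N board depends on how many squares actually carry probability mass:

    def knightProbability(self, N, K, r, c):
        moves = ((2,1),(2,-1),(-2,1),(-2,-1),(1,2),(1,-2),(-1,2),(-1,-2))
        # probability mass is stored only for squares the knight can currently occupy
        positions = {(r, c): 1.0}
        for _ in range(K):
            nxt = {}
            for (i, j), p in positions.items():
                for di, dj in moves:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < N and 0 <= nj < N:
                        nxt[(ni, nj)] = nxt.get((ni, nj), 0.0) + p / 8.0
            positions = nxt
        return sum(positions.values())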
    

  • 0

    This problem can be viewed as a time-homogeneous Markov chain with a finite state space. There are 65 states (0-64): being on the square at row r and column c (states 1-64) or being off the board (state 0).

    For i > 0, state i corresponds to the square whose (zero-based) row and column are given by (in Ruby) (i-1).divmod(8). For example, (1-1).divmod(8) #=> [0,0]; (26-1).divmod(8) #=> [3,1]; (65-1).divmod(8) #=> [7,7].

    Let M be the transition matrix, where m(i,j) is the probability of going from state i to state j in one move. m(0,0) = 1 and m(0,j) = 0 for all j > 0, since the knight is not permitted to move from state 0 to any other state (that is, state 0 is an absorbing state). If states i and j correspond to squares on the board, m(i,j) = 1.0/8 if state j can be reached from state i in one move, and zero otherwise. m(i,0) is the number of the eight knight moves from square i that land off the board, divided by 8.

    Let p be the discrete probability distribution of the initial state. Suppose the starting square is [r,c] (zero-based), which is state s = 8*r+c+1. Then p(s) = 1.0 and p(j) = 0 for j != s. The probability distribution of the knight's state after one move is therefore p*M, after two moves p*M*M, and after n moves q(n) = p*M**n. q(n)[0] is therefore the probability of being off the board after n moves, and 1 - q(n)[0] the probability of still being on the board.
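
    For the fixed 8x8 case, a small numpy sketch of this computation (my own illustration, using the state numbering above, so square (r, c) is state 8*r + c + 1) could look like:

    import numpy as np

    MOVES = ((2,1),(2,-1),(-2,1),(-2,-1),(1,2),(1,-2),(-1,2),(-1,-2))

    def transition_matrix():
        M = np.zeros((65, 65))
        M[0, 0] = 1.0                      # state 0 (off the board) is absorbing
        for i in range(1, 65):
            r, c = divmod(i - 1, 8)
            for dr, dc in MOVES:
                nr, nc = r + dr, c + dc
                if 0 <= nr < 8 and 0 <= nc < 8:
                    M[i, 8 * nr + nc + 1] += 1.0 / 8
                else:
                    M[i, 0] += 1.0 / 8     # this move falls off the board
        return M

    def on_board_probability(r, c, n):
        M = transition_matrix()
        p = np.zeros(65)
        p[8 * r + c + 1] = 1.0             # initial distribution: all mass on (r, c)
        q = p @ np.linalg.matrix_power(M, n)
        return 1.0 - q[0]                  # q[0] = probability of having left the board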

    q(n) converges to a steady-state vector q as n goes to infinity. We can compute q from the observation that, at steady state, q*M = q and q*i = 1, where i is an n-vector of 1's. We can then write q*(M-I) = 0, where I is an identity matrix and 0 is a vector of zeros. Now append a column of 1's to M-I to obtain the matrix Q. We then have q*Q = v, where v is an (n+1)-vector whose last element is 1 and all other elements are 0. Post-multiplying both sides by T, the transpose of Q, gives q*Q*T = v*T (Q*T being a square matrix), so q = v*T*Z, where Z is the inverse of Q*T.
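
    For interest, that general steady-state calculation can be written as a short numpy sketch (appending the column of ones to M - I and solving via the transpose trick just described):

    import numpy as np

    def steady_state(M):
        n = M.shape[0]
        # Q = [M - I | column of ones]; the steady-state row vector q satisfies q*Q = v
        Q = np.hstack([M - np.eye(n), np.ones((n, 1))])
        v = np.zeros(n + 1)
        v[-1] = 1.0                         # v = (0, ..., 0, 1)
        # post-multiply q*Q = v by Q^T:  q = v * Q^T * (Q * Q^T)^{-1}
        return v @ Q.T @ np.linalg.inv(Q @ Q.T)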

    We don't have to go to the trouble of calculating q this way, however. When there is a single absorbing state which, as here, can be reached from any other state (after one or more moves) with a positive probability, we are bound to get to the absorbing state eventually, so we know that q[0] = 1 and q[i] = 0 for i > 0. I presented the general calculation for interest only.


  • 0

    Has anyone tried to use recursion to solve it?


  • 0

    @surikaks
    There you go

    def knightProbability(self, N, K, r, c):
        directions = ((2,1),(2,-1),(-2,1),(-2,-1),(1,2),(1,-2),(-1,2),(-1,-2))
        def search(row, col, k, cache={}):
            # memoize on (row, col, k); a fresh cache is created for every call
            # to knightProbability, since search is redefined each time
            if (row, col, k) in cache: return cache[(row, col, k)]
            if k == 0: return 1.0
            p = 0.0
            for dx, dy in directions:
                i, j = row + dx, col + dy
                if 0 <= i < N and 0 <= j < N:
                    p += search(i, j, k - 1) / 8.0
            return cache.setdefault((row, col, k), p)
        return search(r, c, K)
