Rubik's Cube Solving Robot

Autonomous robot capable of scanning and solving a Rubik’s cube.

Introduction

In this project, we will build a robot that can autonomously scan the state of a Rubik's cube and rotate its faces to solve it.

Solving the Rubik's Cube

[Figure: example images from the MNIST dataset with their corresponding labels]

Design Approach

We will use a simple feedforward neural network with one hidden layer. The architecture, matching the weight shapes in the code below, is as follows: 784 input units (one per pixel of a flattened 28×28 image), 128 hidden units, and 10 output units (one per digit class).

The network uses the sigmoid activation function and is trained using stochastic gradient descent.
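
Concretely, each stochastic gradient descent step nudges every weight and bias against the gradient of the loss on a single training example (or mini-batch). Using standard notation, with $\eta$ denoting a learning rate that the text does not specify:

$$\theta \leftarrow \theta - \eta \, \frac{\partial L}{\partial \theta}, \qquad \theta \in \left\{ \mathbf{W}^{(1)}, \mathbf{b}^{(1)}, \mathbf{W}^{(2)}, \mathbf{b}^{(2)} \right\}$$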

Mechanical Design

The output of the neural network is computed as follows:

$$\begin{align*}
\mathbf{z}^{(1)} &= \mathbf{W}^{(1)} \mathbf{x} + \mathbf{b}^{(1)} \\
\mathbf{a}^{(1)} &= \sigma(\mathbf{z}^{(1)}) \\
\mathbf{z}^{(2)} &= \mathbf{W}^{(2)} \mathbf{a}^{(1)} + \mathbf{b}^{(2)} \\
\mathbf{a}^{(2)} &= \sigma(\mathbf{z}^{(2)})
\end{align*}$$

Where:

- $\mathbf{x}$ is the input vector (a flattened 28×28 image),
- $\mathbf{W}^{(1)}, \mathbf{b}^{(1)}$ and $\mathbf{W}^{(2)}, \mathbf{b}^{(2)}$ are the weights and biases of the hidden and output layers,
- $\sigma$ is the sigmoid function, applied element-wise,
- $\mathbf{a}^{(2)}$ is the output vector of class activations.

The loss is calculated using the cross-entropy function, and gradients are computed for backpropagation.
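
The backward pass is not shown here, but a minimal sketch consistent with the network above might look like the following, assuming a one-hot target vector $\mathbf{y}$ and the element-wise cross-entropy loss $L = -\sum_k \left[ y_k \log a^{(2)}_k + (1 - y_k) \log\left(1 - a^{(2)}_k\right) \right]$, for which the output error simplifies to $\mathbf{a}^{(2)} - \mathbf{y}$:

      import numpy as np

      def sigmoid(z):
          return 1 / (1 + np.exp(-z))

      def backprop(W1, b1, W2, b2, x, y):
          # Forward pass, keeping the intermediate activations.
          a1 = sigmoid(W1 @ x + b1)
          a2 = sigmoid(W2 @ a1 + b2)
          # Output error: for sigmoid units paired with cross-entropy,
          # dL/dz2 reduces to (a2 - y).
          delta2 = a2 - y
          dW2 = delta2 @ a1.T
          db2 = delta2
          # Propagate the error back through the hidden layer;
          # a1 * (1 - a1) is the derivative of the sigmoid.
          delta1 = (W2.T @ delta2) * a1 * (1 - a1)
          dW1 = delta1 @ x.T
          db1 = delta1
          return dW1, db1, dW2, db2

An SGD step then subtracts each gradient scaled by the learning rate, e.g. W1 -= eta * dW1.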

Electrical Design

The following Python code demonstrates the implementation of the neural network described above.


      import numpy as np

      class NeuralNetwork:
          def __init__(self):
              # Initialize weights with small random values and biases with zeros.
              # The shapes give a 784 -> 128 -> 10 architecture.
              self.W1 = np.random.randn(128, 784) * 0.01
              self.b1 = np.zeros((128, 1))
              self.W2 = np.random.randn(10, 128) * 0.01
              self.b2 = np.zeros((10, 1))

          def sigmoid(self, z):
              # Element-wise logistic function; maps values into (0, 1).
              return 1 / (1 + np.exp(-z))

          def feedforward(self, x):
              # Hidden layer: affine transformation followed by sigmoid.
              z1 = np.dot(self.W1, x) + self.b1
              a1 = self.sigmoid(z1)
              # Output layer: same pattern, producing 10 class activations.
              z2 = np.dot(self.W2, a1) + self.b2
              a2 = self.sigmoid(z2)
              return a2

This class initializes the network parameters and defines methods for the activation function and feedforward computation.
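
As a quick sanity check, the class can be exercised with a random vector standing in for a normalized, flattened image (the variable names here are illustrative):

      net = NeuralNetwork()
      x = np.random.rand(784, 1)   # stand-in for a flattened 28x28 image
      output = net.feedforward(x)
      print(output.shape)          # (10, 1): one activation per digit class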

Software Design

Construction and Testing

Final Working Robot

Improvements

Conclusion

We have explored the fundamentals of neural networks by building and training a model to classify handwritten digits. The concepts and implementations provided lay the groundwork for more advanced studies in deep learning.