Turning your phone into a virtual-joystick

Update: I kept working on this and have released it as a package for Windows, Linux and macOS. Check it out: https://github.com/zenineasa/joystick/releases/tag/v1.0.0 -- During my master's programme, my friends and I used to occasionally go to a classroom at the university, turn on a projector, connect devices like a Nintendo Switch or gaming computers along with loads of joysticks, and play simple multiplayer games; MarioKart was my favourite. From time to time, when I get together with people, I think it would be nice to own such devices. I do have a laptop that could easily run such games; SuperTuxKart is similar enough to MarioKart and runs on Linux, Windows and Mac. However, I do not have any joysticks with me at the moment. So I thought it would be a good idea to simply work on a project that lets us use our phones as joysticks. From a high level, the plan is to host APIs on a NodeJS server that wo...

Creating a Neural Network to Predict Periodic Data

Two years ago, Arijit Mondal, a professor of mine who taught us a course on Deep Learning, asked how one could build a neural network that predicts periodic data. The students started calling out their own answers, most of which involved some form of recurrent neural network. I had a different answer.

I was not a very good student during the initial semesters of my bachelor's. I missed several classes due to a lack of motivation, compounded by the change from the environment I had grown up in. But I could recollect some of the things discussed in a math course about Fourier series. Just by summing a bunch of sine waves, the series can approximate many functions with excellent accuracy. How is this any different from the Universal Approximation Theorem that was taught during the initial lectures of this course?
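To make that concrete, here is a small illustration of my own (it is not from the course or from the rest of this post): a partial Fourier sum built from only ten sine terms already traces a period-10 sawtooth wave, the same shape as the training data used further down, reasonably closely.

# Illustration only: approximate a period-10 sawtooth with a partial Fourier sum.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 30, 3000)
sawtooth = (x % 10) / 10  # ramps from 0 to 1, repeating every 10 units

def fourier_partial_sum(x, n_terms, period=10.0):
    # Fourier series of the ramp sawtooth: 1/2 - (1/pi) * sum_{n>=1} sin(2*pi*n*x/period) / n
    result = np.full_like(x, 0.5)
    for n in range(1, n_terms + 1):
        result -= np.sin(2 * np.pi * n * x / period) / (np.pi * n)
    return result

plt.plot(x, sawtooth, label='sawtooth')
plt.plot(x, fourier_partial_sum(x, 10), label='partial sum, 10 sine terms')
plt.legend()
plt.show()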

It struck me: the easiest way a neural network can learn periodic data is if the network itself has some kind of periodic activation function. I know I would not have been the first person in the world to notice this, but plenty of people have not noticed it either, and writing an article about it means someone may come across it in the future and give it a thought.
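To illustrate why (a toy sketch of my own, not part of the original experiment): a single unit with a |sin| activation is periodic by construction, for whatever weight and bias it ends up with, so periodicity is built into the hypothesis class rather than something the network has to infer from the data.

# Toy illustration: one unit with a |sin(x)| activation repeats itself exactly
# once per period, regardless of the (arbitrary, hypothetical) weight and bias.
import numpy as np

w, b = 0.314, 0.2  # arbitrary example values

def unit(x):
    return np.abs(np.sin(w * x + b))

x = np.array([1.0, 2.5, 7.3])
period = np.pi / abs(w)  # |sin| repeats every pi radians
print(np.allclose(unit(x), unit(x + period)))  # True: same output one period later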

I went ahead and implemented this. I did not generate the initial weights in a reproducible fashion, so if you run this at your end, you may not get the same results as I did. I am sharing the code anyway.
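If you do want repeatable runs, a minimal sketch of what could be added before building the model is shown below. This is my suggestion rather than something the original run did, and the exact seeding call depends on your TensorFlow version.

# Hypothetical addition, not part of the original script: seed the random number
# generators so the initial weights are the same on every run.
import random
import numpy as np
import tensorflow as tf

random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)  # on TensorFlow 1.x this is tf.set_random_seed(42)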

# Importing the libraries
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
from keras.utils import plot_model
import matplotlib.pyplot as plt

# Generating training data: a sawtooth that ramps from 0 to 1 with period 10
x = []
y = []
def data_fn_njammale(variable):
    # Fractional position within the current period of length 10
    return (variable % 10) / 10

for i in range(0, 1000):
    x.append(float(i)/10.0)
    y.append(data_fn_njammale(float(i)/10.0))
 
plt.plot(x,y)
plt.show()

# Neural Network Model
# Custom periodic activation, |sin(x)|, used in the first and output layers
def custom_activation(x):
    return K.abs(K.sin(x))

model = Sequential()
model.add(Dense(5, input_dim=1, activation=custom_activation))
model.add(Dense(5))
model.add(Dense(1, activation=custom_activation))
model.summary()
plot_model(model, to_file='model.png')

# Training
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(np.array(x), np.array(y), epochs=1000, batch_size=32, verbose=2,
          validation_data=(np.array(x), np.array(y)))

# Prediction on inputs far outside the training range
x_predict = []
y_act = []
for i in range(4000, 5000):
    x_predict.append(float(i)/10.0)
    y_act.append(data_fn_njammale(float(i)/10.0))

predict = model.predict(np.array(x_predict))
np.set_printoptions(threshold=np.inf)
plt.plot(x_predict, predict, 'r')
plt.plot(x_predict, y_act, 'g')
plt.show()

As you can see, the training data was generated as y = (x % 10) / 10 for x from 0 to 100 in steps of 0.1 (the loop runs i from 0 to 1000 and uses x = i/10), which is a sawtooth wave. The custom activation |sin(x)| was used in the first and output layers. The model was then tested on x values from 400 to 500 (loop indices 4000 to 5000), well outside the training range. Here is the output plot:




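If you prefer a number over eyeballing the plot, one way to quantify the extrapolation quality (my addition, not in the original post; it assumes the script above has just run) is the mean squared error on that out-of-range interval:

# Hypothetical check: mean squared error between the network's predictions and
# the true sawtooth on the extrapolation range (continues from the script above).
from sklearn.metrics import mean_squared_error

mse = mean_squared_error(y_act, predict.flatten())
print('Extrapolation MSE:', mse)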
Considering that I wrote this network from scratch and trained it on my personal computer just to prove a point, I was pretty happy with the results. I hope this inspires someone to build something real out of it some day.
