P2P Zero-Knowledge-Proof based Opensource Social Network - HexHoot

I find that the domain name I purchased on an impulse, hexhoot.com, would be the ideal name for the P2P social network; I described both in some of my previous posts. I have been working on it in my pastime for about a month now, and I have decided to make it open source. You can have a look at the project using the following link: https://github.com/zenineasa/hexhoot

I have attempted to follow the best development practices as much as I can. I have written tests and enabled continuous integration in GitHub to run all the tests, lint and copyright checks on every code change. I have also captured all the foreseeable tasks in a Trello dashboard, which helps me keep track of the bugs I have detected and the important tasks that need to be completed. Quite a lot of tasks remain to make this bug-free and feature-rich; I hope I will find enough time and motivation to work on them in the coming days.
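For the curious, a CI workflow for this kind of setup looks roughly like the following. This is only an illustrative sketch; the file name, job layout and npm script names here are my assumptions, not the exact workflow checked into the HexHoot repository.

# .github/workflows/ci.yml -- hypothetical sketch, not the real HexHoot workflow
name: CI
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci                  # install dependencies
      - run: npm run lint            # assumed lint script name
      - run: npm run copyright-check # assumed copyright-check script name
      - run: npm test                # assumed test script name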

Creating a Neural Network to Predict Periodic Data

Two years ago, Arijit Mondal, a professor of mine who was teaching us a course on Deep Learning, asked how one could make a neural network predict periodic data. The students shouted out their own versions of an answer, most of which involved some form of recurrent neural network. I had a different answer.

I was not a very good student during the initial semesters of my bachelor's. I missed several classes due to a lack of motivation, compounded by the change from the environment I had grown up in. But I could recollect something discussed in a math course regarding Fourier series: using just a bunch of sine waves, the series can approximate many functions to excellent accuracy. How is this any different from the Universal Approximation Theorem that was taught during the initial lectures of this course?
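To make the connection concrete, here is a small sketch, not something from the course, just an illustration I put together, that approximates a sawtooth wave (the same kind of target I use later in this post) with a truncated Fourier series:

import numpy as np
import matplotlib.pyplot as plt

def sawtooth(x, period=10.0):
    # Ramp from 0 to 1, repeating with the given period
    return (x % period) / period

def fourier_partial_sum(x, n_terms=25, period=10.0):
    # Standard Fourier series of the sawtooth ramp on [0, period):
    # f(x) = 1/2 - (1/pi) * sum_{n=1..N} sin(2*pi*n*x/period) / n
    total = np.full_like(x, 0.5)
    for n in range(1, n_terms + 1):
        total -= np.sin(2 * np.pi * n * x / period) / (np.pi * n)
    return total

x = np.linspace(0, 30, 1000)
plt.plot(x, sawtooth(x), 'g', label='sawtooth')
plt.plot(x, fourier_partial_sum(x), 'r', label='25-term Fourier sum')
plt.legend()
plt.show()

A handful of sine terms already hugs the ramp closely; only the jump points retain visible ringing.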

It struck me: the easiest way a neural network can learn periodic data is if the network itself has some kind of periodic activation function. I know I am not the first person in the world to notice this, but then again, a lot of people have not noticed it, and writing an article means someone who comes across it in the future might give it a thought.

I went ahead and implemented this. I did not generate the weights in a reproducible fashion, so if you try to run this at your end, you may not get the same results as I did. I am sharing the code anyway.
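I did not do this in the run below, but seeding the random number generators before building the model would make it reproducible. A minimal sketch, assuming TensorFlow 2.x (on 1.x the equivalent call was tf.set_random_seed):

import numpy as np
import tensorflow as tf

np.random.seed(42)      # seed NumPy's global RNG (used by older Keras initializers)
tf.random.set_seed(42)  # seed TensorFlow's global RNG

With that said, here is the code from my original run.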

# Importing the libraries
import numpy as np
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras.utils.generic_utils import get_custom_objects
from keras.utils import plot_model
from keras import backend as K
import matplotlib.pyplot as plt

# Generating training data: a sawtooth wave with period 10
def data_fn_njammale(variable):
    return (variable % 10) / 10

x = np.array([float(i) / 10.0 for i in range(0, 1000)])  # x in [0, 100), step 0.1
y = data_fn_njammale(x)  # applies elementwise to the array

plt.plot(x, y)
plt.show()

# Custom periodic activation: |sin(x)| makes the network itself periodic
def custom_activation(x):
    return K.abs(K.sin(x))
get_custom_objects().update({'custom_activation': Activation(custom_activation)})

# Neural Network Model
model = Sequential()
model.add(Dense(5, input_dim=1, activation=custom_activation))
model.add(Dense(5))
model.add(Dense(1, activation=custom_activation))
model.summary()
plot_model(model, to_file='model.png')

# Training (validating on the training data itself, just to monitor the fit)
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(x, y, epochs=1000, batch_size=32, verbose=2, validation_data=(x, y))

# Prediction on x in [400, 500), far outside the training range
x_predict = np.array([float(i) / 10.0 for i in range(4000, 5000)])
y_act = data_fn_njammale(x_predict)

predict = model.predict(x_predict)
np.set_printoptions(threshold=np.inf)  # print full arrays when inspecting predictions
plt.plot(x_predict, predict, 'r')  # predicted (red)
plt.plot(x_predict, y_act, 'g')    # actual (green)
plt.show()

As you can see, the training data was generated as y = (x % 10) / 10 for x from 0 to 100 in increments of 0.1, which is a sawtooth wave. The custom activation |sin(x)| was used on the first and output layers. The network was then tested against input values ranging from 400 to 500, way outside the training range; because the activation itself repeats, the network can extrapolate the periodic structure instead of extrapolating linearly the way ReLU networks tend to. Here is the output plot:

[Output plot: predicted values in red, actual sawtooth in green, for x from 400 to 500]

Considering that I built this network and trained it on my personal computer just to prove a point, I was pretty happy with the results. I think this could inspire someone to build something real someday.
