TensorFaux

/ˈtensərfō/


A knock-off of TensorFlow Keras.

A neural network library that mimics the API of Keras, the high-level API of TensorFlow. Everything is written from scratch in Python and NumPy, which makes these otherwise complex models much easier to interpret.

Installation

This library is designed to be as simple as possible, so its only Python dependency is NumPy. Installing the package and running your code should be enough; that said, initializing a virtual environment in your project directory is best practice.

pip3 install tensorfaux
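
If you prefer to isolate the install, a virtual environment works the usual way. The commands below are standard Python tooling (assuming a POSIX shell), not anything specific to this library:

python3 -m venv .venv        # create the virtual environment
source .venv/bin/activate    # activate it
pip3 install tensorfaux      # install into the environment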

Why did I make this?

The purpose of this project was for me to better understand what happens behind the scenes when I’m working with a high-level deep learning package such as TensorFlow Keras. The goal is to mimic most of the fundamental features of Keras.

Features

Despite its small size, this library's simplicity and flexibility make it a (potentially) good learning tool for understanding the fundamentals of deep learning.

Simple

Not only is the API simple enough to keep cognitive load low, the underlying code is also kept as simple as possible. Instead of wading through low-level tensor operations, exportable graphs, and hardware support, you can look directly at the fundamental linear algebra and calculus that bring deep learning to life, as the sketch below illustrates.
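
For instance, the heart of a fully connected layer comes down to a few lines of NumPy. The sketch below is illustrative only (the class name and method signatures are assumptions, not the library's actual source); it treats each sample as a column vector, matching the shapes in the sample usage further down:

import numpy as np

class DenseSketch:
    # Illustrative fully connected layer: forward computes y = W x + b,
    # backward applies the chain rule and takes one gradient step.
    def __init__(self, input_size, output_size):
        self.W = np.random.randn(output_size, input_size)
        self.b = np.random.randn(output_size, 1)

    def forward(self, x):
        self.x = x  # cache the input for the backward pass
        return self.W @ x + self.b

    def backward(self, grad_y, learning_rate):
        grad_W = grad_y @ self.x.T        # dL/dW
        grad_x = self.W.T @ grad_y        # dL/dx, passed to the previous layer
        self.W -= learning_rate * grad_W
        self.b -= learning_rate * grad_y  # dL/db = dL/dy
        return grad_x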

Flexible

The source code can easily be modified to add new features or fix bugs. This flexibility lets anyone enhance the project with a relatively gentle learning curve; the hypothetical activation layer below shows how little code a new feature might take.
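
As a concrete (hypothetical) example, a ReLU activation could be added by mirroring the forward/backward pattern sketched above; the interface is an assumption here, not the published API:

import numpy as np

class ReLU:
    # Hypothetical activation layer with no trainable parameters.
    def forward(self, x):
        self.x = x  # cache the input to build the gradient mask later
        return np.maximum(0, x)

    def backward(self, grad_y, learning_rate):
        # The gradient flows through only where the input was positive;
        # learning_rate is unused since there is nothing to update.
        return grad_y * (self.x > 0)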

Sample Usage

Below is an example of a simple neural network, optimized with stochastic gradient descent, learning the XOR function. Although it looks trivial, XOR is not linearly separable, so linear models such as logistic regression and single-layer perceptrons cannot learn it.

import numpy as np
from tensorfaux.layers import Input, Dense, Tanh
from tensorfaux.models import Sequential
from tensorfaux.optimizers import SGD

np.random.seed(42)

# Data: each sample is a column vector, so X has shape (4, 2, 1) and Y has shape (4, 1, 1)
X = np.reshape([[0, 0], [0, 1], [1, 0], [1, 1]], (4, 2, 1))
Y = np.reshape([[0], [1], [1], [0]], (4, 1, 1))

# Instantiation: a 2-3-1 network with tanh activations
model = Sequential([
    Dense(2, 3),  # 2 inputs -> 3 hidden units
    Tanh(),
    Dense(3, 1),  # 3 hidden units -> 1 output
    Tanh(),
])
model.compile(optimizer=SGD(learning_rate=0.01, batch_size=3))

# Training
model.fit(X, Y, epochs=10000)

# Prediction
Y_pred = model.predict(X)
for (y_true, y_pred) in zip(Y, Y_pred):
    print(f'Actual: {y_true}, Predicted: {y_pred}')

Output:

Actual: [[0]], Predicted: [[0.0003956]]
Actual: [[1]], Predicted: [[0.97018558]]
Actual: [[1]], Predicted: [[0.97092169]]
Actual: [[0]], Predicted: [[0.00186825]]
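
If hard 0/1 labels are needed, the continuous outputs can simply be rounded. This is a plain NumPy usage note (assuming predict returns an array-like with the same shape as Y), not a feature of the library:

Y_class = np.round(Y_pred).astype(int).reshape(-1)
print(Y_class)  # [0 1 1 0], matching the XOR truth table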