Cost Analysis

Total Cost: $21.86

The costs for the system I implemented are low. The only necessary hardware is a computer (which was already available at no additional expense), and the software I chose was Python, which is free to download. I also used GitHub as a code repository and PyCharm as an IDE, both of which incurred no expense. The data was likewise free and open to public use, as all historical stock price data was downloadable from Yahoo Finance in the form of CSV files. Because these components added no expense, they met the design requirement that using the system should impose little cost on the user. The reasoning behind this requirement is that the system is meant to predict stock prices in order to optimize stock trades and maximize profit, and part of maximizing profit is ensuring that the system itself does not create unneeded expense.
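As an illustration, the snippet below shows one way such a CSV could be loaded for modeling. It is a minimal sketch, assuming a file downloaded from Yahoo Finance with its standard columns (Date, Open, High, Low, Close, Adj Close, Volume); the filename is hypothetical.

```python
import pandas as pd

# Hypothetical file: historical prices downloaded from Yahoo Finance as CSV.
# Yahoo Finance exports typically include Date, Open, High, Low, Close,
# Adj Close, and Volume columns.
prices = pd.read_csv("AAPL.csv", parse_dates=["Date"], index_col="Date")

# Sort chronologically and keep the adjusted close, a common choice for
# modeling since it accounts for splits and dividends.
prices = prices.sort_index()
close = prices["Adj Close"]

print(close.head())
```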

The only portion of the system that added cost was the use of Amazon Web Services (AWS). I used AWS for its Elastic Compute Cloud (EC2), a virtual server for running applications on the AWS infrastructure. More specifically, I used the p2.xlarge instance, a general-purpose GPU (graphics processing unit) instance for running machine learning applications, at a cost of $1.084 per hour.

Utilizing a GPU for deep learning networks is useful because of its parallel architecture, which matches the structure of a Deep Neural Network (DNN). DNNs have a high degree of structural complexity due to the number of inputs, the number of hidden layers, the activation functions, and so on. Training in particular is subject to this complexity, because it combines the input features, loss function, and optimization algorithm to fit the weights of the network through numerous iterations over all of the data. Thus, as the number of parameters and neurons increases, so do the complexity and the corresponding training time [37].

Another advantage of the GPU is memory. Memory is an issue for DNNs because of the huge numbers of inputs, weights, and activations they must hold. Conventional computer architectures were developed mainly for serial processing, with DRAM (dynamic random access memory) intended for high-density storage, so DNNs create a bottleneck between processor and memory. Building memory into conventional processors is a possible solution, but such on-chip memory is expensive. Of the two processors available for handling this much data, the CPU is not ideal for a DNN: it is better suited to computations on small bursts of data and struggles with large matrix multiplications. GPUs, on the other hand, are designed to handle parallel computations over large amounts of data. Because they perform matrix multiplication efficiently, GPUs are better suited to the training requirements of a DNN (where the feed-forward portion is a series of matrix multiplications) and significantly cut down the time needed for training and testing [38].

Because of the time constraints I faced and the large quantities of data and number of tests I had to run, AWS was necessary to speed up this process. Overall, running AWS to complete all of these tasks and produce a final output cost $21.86, which at $1.084 per hour corresponds to roughly 20 hours of instance time. This is a small cost relative to the practical potential gains from investment through the use of predictive modeling in stock trading.
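To make the matrix-multiplication point concrete, the sketch below expresses a feed-forward pass as a chain of matrix products, which is exactly the workload a GPU parallelizes well. It is an illustrative example, not the network used in this project; the layer sizes and activation function are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative sizes: a batch of 64 samples with 10 input
# features, passing through hidden layers of 128 and 64 units.
X = rng.standard_normal((64, 10))
W1, b1 = rng.standard_normal((10, 128)), np.zeros(128)
W2, b2 = rng.standard_normal((128, 64)), np.zeros(64)
W3, b3 = rng.standard_normal((64, 1)), np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)

# The feed-forward pass is a series of matrix multiplications followed
# by elementwise activations -- the operations a GPU parallelizes well.
h1 = relu(X @ W1 + b1)
h2 = relu(h1 @ W2 + b2)
y_hat = h2 @ W3 + b3  # one prediction per sample in the batch

print(y_hat.shape)  # (64, 1)
```

Training repeats this pass (plus a backward pass of comparable cost) over every batch for many iterations, which is why the matrix-multiplication throughput of the GPU dominates total training time.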
