I have a passion for Data Science, Innovation, Leadership, and trying to make the most out of every situation. You will see these themes reflected in these blogs.

## The Power of Code Snippets

Modern-day programmers write a lot of code. That’s part of the job. But one of the main principles of writing good code is the DRY principle: Don’t Repeat Yourself (see The DRY Principle: R Functions, The DRY Principle: Python Functions, Wikipedia, or any number of other online sources). In essence, this principle states that if you are about to write the same code twice, don’t; instead, write the code once in a function, then call the function twice.
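As a minimal sketch of the principle in Python (with made-up example data), the snippet below writes a standardisation routine once as a function and calls it twice, instead of duplicating the arithmetic for each variable:

```python
# Hypothetical data: two variables that both need standardising.
heights = [170, 182, 165, 190]
weights = [65, 80, 58, 92]

def standardise(values):
    """Centre and scale a list of numbers (written once, called twice)."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

# DRY in action: one definition, two calls.
z_heights = standardise(heights)
z_weights = standardise(weights)
```

If the scaling logic ever needs to change, it now changes in exactly one place.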

Approaching this from a Data Science perspective, I will focus on the two most popular coding languages in this field, and a popular IDE for each: R + RStudio and Python + VSCode.
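In VSCode, for instance, reusable boilerplate can be stored as user snippets. The hypothetical snippet below (which would live in a user snippets file such as `python.json`) expands the prefix `defn` into a Python function skeleton, supporting the DRY habit at the editor level:

```json
{
  "Function skeleton": {
    "prefix": "defn",
    "body": [
      "def ${1:function_name}(${2:args}):",
      "    \"\"\"${3:docstring}\"\"\"",
      "    ${0:pass}"
    ],
    "description": "Insert a Python function skeleton"
  }
}
```

The `$1`, `$2`, `$3` placeholders are tab stops: pressing Tab jumps the cursor through the name, arguments, and docstring in turn.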

## Mental Models and Social Situations

Our mental models are deeply ingrained images of how we see the world and how we react in different scenarios and situations (McShane, Travaglione & Olekalns 2010, p. 91; Senge 2006, p. 164).

When I meet someone for the first time, my mental models greatly influence my perceptions of them: their words, their expressions, their reactions. And by extension, these mental models then influence my thoughts and actions in that situation: how I behave, how I speak, even my subconscious mannerisms. As a result, that first meeting can make for a pleasant first impression, or an embarrassingly regrettable occasion.

Conversely, I believe that my mental models also limit my perceptions of the other people I meet. This is because I am basing my conclusions on two things: a single, short meeting, and a filtered model of the world which has been developed over my entire lifetime. This leaves me with limited inferential flexibility, and the possibility of a prejudicial conclusion (Ormerod 2000, cited in Johnson-Laird 2001, p. 436; Markman & Gentner 2001, p. 230).

## Reinforcement Learning

Reinforcement Learning is not a new concept; it has been developed and matured over 70 years of academic rigour. Fundamentally, Reinforcement Learning is a method of machine learning by which an algorithm makes decisions and takes actions within a given environment, and learns which decisions are appropriate through repeated trial and error. The academic discourse on Reinforcement Learning pursued three concurrent ‘threads’ of research (trial and error, optimal control, and temporal difference) before they were united in the 1990s. Reinforcement Learning then proceeded to master Chess, Go, and countless electronic games. Modern applications of Reinforcement Learning are enabling businesses to optimise, control, and monitor their respective processes to a phenomenal level of accuracy and finesse. As a result, the future of Reinforcement Learning is both exciting and fascinating, as research aims to improve these algorithms’ interpretability, accountability, and trustworthiness.
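The trial-and-error and temporal-difference threads can be sketched in a few lines of Python. The toy environment below (a hypothetical five-state corridor, not an example from the original post) is solved by tabular Q-learning: the agent stumbles around at random at first, and the temporal-difference update gradually propagates the goal reward back through the Q-table:

```python
import random

random.seed(42)

N_STATES = 5          # corridor states 0..4; reward only at the right end
ACTIONS = [-1, +1]    # step left, step right
GOAL = N_STATES - 1

# Q-table: one learned value per (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(state):
    """Best-known action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state = min(max(state + action, 0), GOAL)
        reward = 1.0 if next_state == GOAL else 0.0
        # Temporal-difference update toward the one-step target.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy steps right in every state.
policy = {s: greedy(s) for s in range(GOAL)}
```

Nothing tells the agent that “right” is correct; the policy emerges purely from repeated trial and error against the environment’s reward signal.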

## Vanilla Neural Networks in R

The purpose of this paper is to take a ‘back to basics’ approach to designing Deep Learning solutions. The intention is not to create the most predictive model, nor to use the latest and greatest techniques (such as convolution or recursion); rather, it is to build a basic neural network from scratch, using no frameworks, and to walk through the methodology.
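In that spirit, here is a minimal ‘from scratch, no frameworks’ sketch: a single sigmoid neuron trained by hand-derived gradient descent on made-up data. This is an illustration of the approach, not the paper’s actual network:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: outputs switch from 0 to 1 as x grows.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]

w, b = 0.1, 0.0   # initial weight and bias
lr = 0.5          # learning rate

def loss(w, b):
    """Mean squared error of the neuron's predictions."""
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

before = loss(w, b)
for _ in range(1000):
    # Backpropagation by hand: chain rule through the squared error
    # and the sigmoid (whose derivative is p * (1 - p)).
    dw = db = 0.0
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)
        grad = 2 * (p - y) * p * (1 - p) / len(xs)
        dw += grad * x
        db += grad
    w -= lr * dw
    b -= lr * db
after = loss(w, b)
```

Every step a framework would hide — the forward pass, the derivative, the parameter update — is visible in a dozen lines.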

## Exploring Undernourishment

Our World today has many issues, and the Prevalence of Undernourishment is just one of them. Over the past twenty years, the United Nations, through the Food and Agriculture Organisation (FAO), has collected data on many countries, and has helped to influence and improve their Undernourishment overall.

This report, and the associated Data Exploration App, seeks to explore the data provided by the FAO, to understand its nuances, to learn what information it is telling us, and to derive meaning from it. The research activities undertaken focus on four key areas.

This report embarks on an exploratory data analysis, guided by the narratives told by this data.

## Addressing the John Smith Problem

Many databases contain duplicate data, especially where manual data entry is involved. To clean the data, it is necessary to identify and resolve these unnecessary duplicates. However, many duplicates are non-matching, meaning the duplicated records may differ slightly, for example through spelling errors. It is challenging to identify these duplicates using the SQL database language, because SQL relies on exact matching (due to the tenets of Relational Database theory). It is therefore necessary to look to other methods of identifying non-matching duplicates, which is where Fuzzy Matching comes in.
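As an illustrative sketch (not the method used in the original analysis), Python’s standard-library `difflib.SequenceMatcher` can score the similarity of two strings, so that non-exact duplicates such as misspelt names can be flagged above a chosen threshold:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical messy name data.
names = ["John Smith", "Jon Smith", "John Smyth", "Jane Doe"]
target = "John Smith"

# Flag likely duplicates of `target` above a chosen threshold.
THRESHOLD = 0.8
matches = [n for n in names if similarity(target, n) >= THRESHOLD]
```

Here “Jon Smith” and “John Smyth” are caught as likely duplicates of “John Smith”, while “Jane Doe” falls well below the threshold. The threshold itself is a judgement call that trades false positives against false negatives.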

## Reinforcement Learning in the Warehousing Industry

Artificial Intelligence and Machine Learning are advancing at an ever-increasing rate. Reinforcement Learning (RL) is one area of Machine Learning which is proving incredibly promising for the future of business efficiency and optimisation. The Warehousing and Logistics industry faces some unique challenges, some of which can be addressed and improved through the application of Reinforcement Learning. One example is the Picking and Putaway strategies implemented within modern Warehouse Management systems. If a Reinforcement Learning algorithm were developed to address this scenario, businesses would benefit through improved efficiency and profitability. However, Reinforcement Learning has some nuanced difficulties which will need to be handled when scaling such a solution to a production-ready environment.

## The Future of Australian Energy Prices

In recent months, media outlets have alerted the public to fluctuations in energy prices with headlines such as Australia’s High Electricity Prices the ‘New Normal’, Report Says (Hutchens, 2018), Higher Energy Prices are Here to Stay — Here’s What We Can Do About It (Percival, 2018), and ‘No Likelihood of Relief Ahead’: Future Power Prices Continue to Rise (Latimer, 2018). These articles create a sense of concern due to the impact on Australians’ financial wellbeing. However, very little in these articles is grounded in statistical evidence.

While these articles may favour rhetoric over academic rigour, their sentiment is still reflected in the academic literature. In his 2015 article Research and Development, Welfare and Efficiency: An Australian Energy Perspective, Sardar (2015) argues that increasing numbers of Australians are being driven to welfare as a direct result of energy prices. Moreover, in Australian Energy Policy and Economic Rationalism, Horan et al. (2017) accuse the Australian Government of irrational and inefficient energy policy, which is placing increasing and unnecessary financial pressure on Australian households and businesses. Furthermore, Lincoln (2012) proposes a succinct set of options for change which may curb this pressure, as articulated in his article Options for Change in the Australian Energy Profile. As these works show, the landscape of the Australian Energy Market is changing, and this trend may have dire consequences for the future of the Australian economy.

Therefore, with the intent of adding some statistical rigour to the discourse around Australian Energy Prices, this paper aims to model the aggregated monthly Energy Point Price in order to create a prediction for the future. The data is extracted from an Australian Government website, then visualised, analysed, tested, and forecast in order to create such a prediction. The resulting prediction will allow citizens to adequately plan for the future, and can also inform Governmental Agencies when setting future policy.
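As a deliberately simplified sketch of the forecasting idea (using made-up prices rather than the actual Government data, and a plain linear trend rather than the paper’s model), the Python below fits an ordinary least squares trend to a monthly series and extends it three months ahead:

```python
# Hypothetical monthly average prices ($/MWh) -- illustrative only.
prices = [62.1, 63.4, 65.0, 64.2, 66.8, 68.1, 67.5, 69.9, 71.2, 70.8, 72.5, 74.0]

n = len(prices)
t = list(range(n))

# Ordinary least squares for a linear trend: price ~ intercept + slope * month.
t_mean = sum(t) / n
p_mean = sum(prices) / n
slope = (sum((ti - t_mean) * (pi - p_mean) for ti, pi in zip(t, prices))
         / sum((ti - t_mean) ** 2 for ti in t))
intercept = p_mean - slope * t_mean

# Naive forecast: extend the fitted trend three months ahead.
forecast = [intercept + slope * (n + h) for h in range(3)]
```

A real analysis would test for stationarity and seasonality before choosing a forecasting model; the point here is only the shape of the pipeline: fit a model to the history, then extrapolate it forward.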