Author: Brian A. Ree
                            
                            
                            
                            
                            0: Tensorflow Logistic Regression: LoadTensorData
                            
One small change we had to make to our LoadTensorData class, which won't affect the older regression tutorials, is the structure of the answers tensor object.
The new code is shown below. The extra use of the list variable gives the answers tensor a two-dimensional shape, (X, number of answer columns), instead of a flat vector,
which is the structure that cross-entropy style losses such as sigmoid_cross_entropy_with_logits expect.
                            
                            
                            
def generateData(self, lColumnMap=[]):
    print ("")
    print ("")
    print("Generating Tensor Data:")
    self.resetRows()
    self.columnMap = lColumnMap
    self.dataModelColCount = len(self.columnMap)
    self.dataModelAnsColCount = 2  # category yes, category no
    # Prep training data
    tp = int(self.loadFeatureData.rowCount * self.trainPrct)
    vp = int(self.loadFeatureData.rowCount * self.validatePrct)
    # Convert base data
    val2 = []
    val3 = []
    rowcnt = 0
    for row in self.loadFeatureData.rows:
        val = []
        for col in self.columnMap:
            val.append(float(row.getMemberByName(col)))
        # efl
        val2.append(val)
        val4 = []
        val4.append(float(row.getMemberByName('AnswerCatYes')))
        val4.append(float(row.getMemberByName('AnswerCatNo')))
        val3.append(val4)
        rowcnt += 1
    # efl
    self.rows = tf.to_float(val2)
    self.answers = tf.to_float(val3)
    self.rowCount = rowcnt
    print("TensorRow Answer Shape: %s" % self.answers.get_shape())
    print("TensorRow Data Shape: %s" % self.rows.get_shape())
    print('TensorRow Count: %i' % (self.rowCount))
    # Convert train data
    val2 = []
    val3 = []
    rowcnt = 0
    rc = 0
    for row in self.loadFeatureData.rows:
        if rowcnt < tp:
            val = []
            for col in self.columnMap:
                val.append(float(row.getMemberByName(col)))
            # efl
            val2.append(val)
            val4 = []
            val4.append(float(row.getMemberByName('AnswerCatYes')))
            val4.append(float(row.getMemberByName('AnswerCatNo')))
            val3.append(val4)
            rc += 1
        # eif
        rowcnt += 1
    # efl
    self.train = tf.to_float(val2)
    self.trainAnswers = tf.to_float(val3)
    self.trainCount = rc
    print("TensorTrain Answer Shape: %s" % self.trainAnswers.get_shape())
    print("TensorTrain Data Shape: %s" % self.train.get_shape())
    print('TensorTrain Count: %i' % (self.trainCount))
    # Convert validate data
    val2 = []
    val3 = []
    rowcnt = 0
    rc = 0
    for row in self.loadFeatureData.rows:
        if rowcnt >= tp and rowcnt < (tp + vp):  # >= so the first validation row isn't skipped
            val = []
            for col in self.columnMap:
                val.append(float(row.getMemberByName(col)))
            # efl
            val2.append(val)
            val4 = []
            val4.append(float(row.getMemberByName('AnswerCatYes')))
            val4.append(float(row.getMemberByName('AnswerCatNo')))
            val3.append(val4)
            rc += 1
        # eif
        rowcnt += 1
    # efl
    self.validate = tf.to_float(val2)
    self.validateAnswers = tf.to_float(val3)
    self.validateCount = rc
    print("TensorValidate Answer Shape: %s" % self.validateAnswers.get_shape())
    print("TensorValidate Data Shape: %s" % self.validate.get_shape())
    print('TensorValidate Count: %i' % (self.validateCount))
# edef
                             
                            
                            
The lines...

val4 = []
val4.append(float(row.getMemberByName('AnswerCatYes')))
val4.append(float(row.getMemberByName('AnswerCatNo')))
val3.append(val4)

force the extra structure we need: each row's answer values are wrapped in their own list, so the resulting answers tensor is two-dimensional.
It's a small change we've made to each of the tensor loading loops in this class. A minimal sketch of the effect is shown below.
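
Here is a quick illustration of the difference (not part of the tutorial source; it just uses the same TensorFlow 1.x call as the loading code). Without the extra list the answers become a flat vector; with it they become a two-dimensional tensor:

import tensorflow as tf

flat   = tf.to_float([1.0, 0.0, 1.0])                       # old structure, shape (3,)
nested = tf.to_float([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])  # new structure, shape (3, 2)
print(flat.get_shape())    # (3,)
print(nested.get_shape())  # (3, 2)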
                                Next up we'll review the code change to allow our logistic regression model to run in our Main executable.
                            
                            
                            
elif model_type == 'logistic_regression':
    tfModel = RegModelLogistic.RegModelLogistic(verbose, tData, lin_reg_positive_result, randomSeed, trainStepsMultiplier, logPrint, learning_rate, evalType)
    tfModel.startTraining()
                             
                            
                            
That's pretty much the only new code we'll need in this class now that we have our execution dictionary set up. You'll have to import
the new regression class though, or you'll get some errors. Next up we'll review the actual RegModelLogistic class. We'll post the class source
code below and follow it with an opening discussion of the class variables and constructor arguments.
                            
                            
                            1: Tensorflow Logistic Regression: Model Details
                            
You can see that our new LoadTensorData class takes a trainPrct and a validatePrct argument as well as our LoadFeatureData class instance.
Now let's take a look at the RegModelLogistic class, which stands for regression model, logistic. It takes our LoadTensorData instance as an argument
as well as a few other arguments. Let's list them here; a quick example call follows the list.
                            
                            
                            
- verbose: Boolean flag that indicates whether we should use verbose logging for extra debugging information.

- tData: An instance of our LoadTensorData class, used to access the tensor data we've prepared.

- log_reg_positive_result: Determines the threshold that triggers a yes inference.

- randomSeed: A boolean flag that indicates whether we should initialize our weights and bias with random values instead of zeros.

- trainStepsMultiplier: A numeric value multiplied by the size of the training set to set the total number of training steps.

- logPrint: A value controlling how often the loss is printed to the console as the model is trained.

- learning_rate: An important argument; it controls how quickly the model learns. It should be a very small incremental value, something like 0.000001, depending on your data.

- evalType: A string naming a specific evaluation to run for the given model. This allows us to specify special custom checks in our model's code.
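
To make the mapping concrete, here is an example of constructing and running the model with these arguments. The values below are only illustrative (they mirror the class defaults shown in the source that follows), and tData stands for the LoadTensorData instance prepared earlier:

tfModel = RegModelLogistic.RegModelLogistic(
    lVerbose=True,              # verbose
    lLoadTensorData=tData,      # tData
    lPositiveResult=0.50,       # log_reg_positive_result
    lRandomSeed=False,          # randomSeed
    lTrainStepsMultiplier=1.0,  # trainStepsMultiplier
    lLogPrint=100,              # logPrint
    lLearningRate=0.0000001,    # learning_rate
    lEvalType=''                # evalType
)
tfModel.startTraining()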
 
                            
                            
                            
So let's take a look at our RegModelLogistic implementation. One quick note is that we're going to overlook the checkpoint code for now.
This feature allows the code to save its trained model at a checkpoint so that we don't lose any precious time spent training our model.
I've played with the code and it does seem to do its job, but I will leave that as an exploration exercise for you.
                            
                            
                            
import tensorflow as tf
import os
class RegModelLogistic:
    """ A general implementation of a TensorFlow logistic regression model. """
    verbose = False
    w = None
    b = None
    dataModelColCount = 0
    dataModelAnsColCount = 0
    totalTrainingSteps = 1000
    trainStepsMultiplier = 1
    checkpoint = False
    randomSeed = False
    learning_rate = 0.0000001
    positiveResult = 0.55
    loadTensorData = None
    logPrint = 100
    evalType = ''
    def __init__(self, lVerbose=False, lLoadTensorData=None, lPositiveResult=0.50, lRandomSeed=False, lTrainStepsMultiplier=1.0, lLogPrint=100, lLearningRate=0.0000001, lEvalType=''):
        self.verbose = lVerbose
        self.learning_rate = lLearningRate
        self.loadTensorData = lLoadTensorData
        self.positiveResult = lPositiveResult
        self.randomSeed = lRandomSeed
        self.trainStepsMultiplier = lTrainStepsMultiplier
        self.dataModelColCount = self.loadTensorData.dataModelColCount
        self.dataModelAnsColCount = self.loadTensorData.dataModelAnsColCount
        self.logPrint = lLogPrint
        self.evalType = lEvalType
    # edef
    def combine_inputs(self, x):
        return tf.matmul(x, self.w) + self.b
    # edef
    def inference(self, x):
        # Compute inference model over data x and return the result.
        return tf.nn.softmax(self.combine_inputs(x))
    # edef
    def loss(self, x, y):
        # Compute loss over training data x and expected outputs y.
        return tf.reduce_mean(-tf.reduce_sum(y * tf.log(self.inference(x)), reduction_indices=1))
    # edef
    def inputs(self):
        # Read/generate input training data x and expected outputs y.
        return self.loadTensorData.train, self.loadTensorData.trainAnswers
    # edef
    def train(self, totalLoss):
        return tf.train.GradientDescentOptimizer(self.learning_rate).minimize(totalLoss)
    # edef
    def evaluate(self, sess, test_x, test_y):
        Y_predicted = tf.equal(tf.argmax(self.inference(test_x), 1), tf.argmax(test_y, 1))
        mse = tf.reduce_mean(tf.cast(Y_predicted, tf.float32))
        print('Accuracy: %.4f' % sess.run(mse))
    # edef
    def startTraining(self):
        self.totalTrainingSteps = int(self.loadTensorData.trainCount * self.trainStepsMultiplier)
        print('Found training steps: %i' % self.totalTrainingSteps)
        with tf.Session() as sess:
            print('Found tensor dimension: %ix%i' % (self.dataModelColCount, self.dataModelAnsColCount))
            if self.randomSeed == True:
                self.w = tf.Variable(tf.random_normal([self.dataModelColCount, self.dataModelAnsColCount], stddev=0.5), name='weights')
                self.b = tf.Variable(tf.random_normal([self.dataModelAnsColCount], stddev=0.5), name='bias')
            else:
                self.w = tf.Variable(tf.zeros([self.dataModelColCount, self.dataModelAnsColCount]), name='weights')
                self.b = tf.Variable(tf.zeros([self.dataModelAnsColCount]), name='bias')
            # eif
            print("Weight: %s" % self.w.get_shape())
            print("Bias: %s" % self.b.get_shape())
            # Model setup
            tf.global_variables_initializer().run()
            # Create a saver
            if self.checkpoint == True:
                saver = tf.train.Saver()
            # eif
            x, y = self.inputs()
            total_loss = self.loss(x, y)
            train_op = self.train(total_loss)
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess, coord)
            training_steps = self.totalTrainingSteps
            initial_step = 0
            if self.checkpoint == True:
                # Verify we don't have a checkpoint saved already
                ckpt = tf.train.get_checkpoint_state(os.path.dirname(__file__) + "/checkpoints/")
                if ckpt and ckpt.model_checkpoint_path:
                    # Restores from checkpoint
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    initial_step = int(ckpt.model_checkpoint_path.rsplit('-', 1)[1])
                # eif
            # eif
            # Training loop
            if self.checkpoint == True:
                for step in range(initial_step, training_steps):
                    sess.run([train_op])
                    if step % self.logPrint == 0:
                        print ("Loss: ", sess.run([total_loss]))
                    # eif
                    if step % 1000 == 0:
                        saver.save(sess, './checkpoints/eod-model', global_step=step)
                    # eif
                # efl
            else:
                for step in range(training_steps):
                    sess.run([train_op])
                    if step % self.logPrint == 0:
                        print ("Loss: ", sess.run([total_loss]))
                    # eif
                # efl
            # eif
            self.evaluate(sess, self.loadTensorData.validate, self.loadTensorData.validateAnswers)
        # ewith
    # edef
# eclass
                             
                            
                            
                                Not too bad for a logistic regression model. As usual let's go over the class variables first. We'll ignore the variables that were covered in our review of the
                                constructor arguments.
                            
                            
                            
                                - w: The tensor containing our weights.
 
                                - b: The tensor containing our bias.
 
- dataModelColCount: The number of columns in our data model, based on the DataRow2Tensor column map entry used by this class.
 
- totalTrainingSteps: The total number of training steps; set to the number of rows in the training set times the trainStepsMultiplier.
 
                            
                            
                            
Nothing too crazy, right? All the arguments make sense; we're defining some data-driven values that control how our class operates its logistic regression model.
Next up let's review the methods in this class and then start diving into some code.
                            
                            
                            
- combine_inputs: Combines the input tensor in the same way that an inference was performed in our linear regression model. The output of this method is fed into the softmax in our inference method to determine our yes/no inference.

- inference: Defines the operations necessary to create our inference formula; this is the formula that produces our guess.

- loss: Defines the operations necessary to calculate the loss associated with the current weights and bias.

- inputs: A simple method that returns our input data; this will be the training and training answer data stored in our local instance of the LoadTensorData class.
 
                                - train: Defines the operations needed to train the model.
 
                                - evaluate: Defines the operations needed to evaluate the accuracy of the model.
 
                            
                            
                            
Let's take a look at the inference method. This method contains the set of operations that create our inference formula. That is, given an input x and the current
values of our weights and bias, we generate a predicted output, Y_predicted.
                            
                            
                            
def combine_inputs(self, x):
    return tf.matmul(x, self.w) + self.b
# edef
def inference(self, x):
    # Compute inference model over data x and return the result.
    return tf.nn.softmax(self.combine_inputs(x))
# edef
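
As a small standalone sketch (illustrative numbers, not part of the tutorial source), the matrix multiply maps each input row onto one raw score per answer column, and softmax turns those scores into probabilities that sum to one:

import tensorflow as tf

x = tf.to_float([[1.0, 2.0, 3.0]])                      # one row, three feature columns
w = tf.to_float([[0.1, -0.1], [0.2, 0.0], [0.0, 0.3]])  # three feature columns, two answer columns
b = tf.to_float([0.05, -0.05])
logits = tf.matmul(x, w) + b                            # shape (1, 2), the combine_inputs result
probs = tf.nn.softmax(logits)                           # each row sums to 1, the inference result
with tf.Session() as sess:
    print(sess.run(probs))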
                             
                            
                            
Let's take a look at how inference is used; it'll make more sense that way. Next up is the loss function, where we compare our inferred
answer to our actual answer.
                            
                            
                            
def loss(self, x, y):
    # Compute loss over training data x and expected outputs y.
    return tf.reduce_mean(-tf.reduce_sum(y * tf.log(self.inference(x)), reduction_indices=1))
# edef
                             
                            
                            
The loss method for a logistic regression model works a little differently than the one in our linear regression model.
Our combine_inputs method provides the raw combined value we are trying to optimize, and the inference method applies softmax to those
combined inputs to turn them into probabilities for our two answer categories. You'll also notice that we're using a different loss calculation:
instead of a squared error we compute the cross-entropy between our answer tensor and the inference output.
A tiny hand-worked version of that calculation follows.
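
Here is a minimal sketch in plain Python (illustrative numbers only, not part of the tutorial source) of what the -reduce_sum / reduce_mean pair in the loss method computes:

import math

# Two example rows: one-hot answers and the softmax probabilities from inference.
answers  = [[1.0, 0.0], [0.0, 1.0]]
inferred = [[0.7, 0.3], [0.4, 0.6]]

row_losses = []
for y, p in zip(answers, inferred):
    # -sum(y * log(p)) picks out -log of the probability assigned to the true category.
    row_losses.append(-sum(yi * math.log(pi) for yi, pi in zip(y, p)))

loss = sum(row_losses) / len(row_losses)  # the reduce_mean step
print(row_losses)  # [-log(0.7), -log(0.6)] ~ [0.357, 0.511]
print(loss)        # ~0.434

Next up the inputs and train methods will be reviewed. Let's take a look at them.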
                            
                            
                            
def inputs(self):
    # Read/generate input training data x and expected outputs y.
    return self.loadTensorData.train, self.loadTensorData.trainAnswers
# edef
def train(self, totalLoss):
    return tf.train.GradientDescentOptimizer(self.learning_rate).minimize(totalLoss)
# edef
                             
                            
                            
I know, I know, tons of code to review. Well, let's dig in. The inputs method is very simple; it just provides an abstraction layer for us to pull in the tensor data
used in our model. You can see that we're pulling information from our instance of the LoadTensorData class. Our train method is also very simple, but it actually does a lot.
It uses TensorFlow's GradientDescentOptimizer to minimize the error in our model by incrementally trying to find the lowest point on the error curve. We do this by slowly
walking the value of our weights based on the slope of our error curve until the best fit to a minimum is found. This technique is what enables us to optimize a complex neural network
efficiently. It can also cause strange behavior like overshooting and jumping back and forth. We won't cover those issues in this tutorial, but you can look up those types of
neural network errors so that you can be better prepared to address them. A toy sketch of the walk-down-the-slope idea follows.
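
The following sketch is not part of the tutorial code; it just shows the idea on a one-dimensional error curve, loss(w) = (w - 3)^2, with the gradient written out by hand instead of letting TensorFlow compute it:

# Toy gradient descent: minimize loss(w) = (w - 3)**2, whose slope (gradient) is 2 * (w - 3).
w = 0.0
learning_rate = 0.1
for step in range(50):
    grad = 2.0 * (w - 3.0)
    w = w - learning_rate * grad  # take a small step against the slope
print(w)  # close to 3.0, the bottom of the curve

With a learning rate that is too large, the same loop overshoots the minimum and can bounce back and forth or diverge, which is exactly the behavior mentioned above.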
                            
                            
                            
                                We're going to take a look at our evaluate method next. This method is used after training to compare the results of our neural network to the known answers we loaded
                                in our LoadTensorData class.
                            
                            
                            
def evaluate(self, sess, test_x, test_y):
    Y_predicted = tf.equal(tf.argmax(self.inference(test_x), 1), tf.argmax(test_y, 1))
    mse = tf.reduce_mean(tf.cast(Y_predicted, tf.float32))
    print('Accuracy: %.4f' % sess.run(mse))
# edef
                             
                            
                            
Well, let's think about what we're doing in this method. We are now evaluating the accuracy of our trained model against our validation data.
In order to do so we need to generate a set of predicted answers; Y_predicted takes care of that for us, and we get it with another call to our
inference method, except we pass in validation data now and not training data.
Once we have our predicted answers we compare their predicted category to the known answers, row by row, and take the mean of the matches,
which gives us an accuracy figure (despite the variable being named mse, this is an accuracy, not a squared error).
The evalType value is reserved for running custom evaluation code, which comes in handy when we want to check values specific to a certain data set.
A hand-worked version of the accuracy calculation follows.
                            
                            
                                Alright now, we're almost done. Soon we'll be doing some test runs to check our model's performance. But first we need to review the startTraining
                                method. This is the most complex part of this class but you've seen all the supporting methods so you have an idea what we're doing. Let's look at the code.
                            
                            
                            
def startTraining(self):
    self.totalTrainingSteps = int(self.loadTensorData.trainCount * self.trainStepsMultiplier)
    print('Found training steps: %i' % self.totalTrainingSteps)
    with tf.Session() as sess:
        print('Found tensor dimension: %ix%i' % (self.dataModelColCount, self.dataModelAnsColCount))
        if self.randomSeed == True:
            self.w = tf.Variable(tf.random_normal([self.dataModelColCount, self.dataModelAnsColCount], stddev=0.5), name='weights')
            self.b = tf.Variable(tf.random_normal([self.dataModelAnsColCount], stddev=0.5), name='bias')
        else:
            self.w = tf.Variable(tf.zeros([self.dataModelColCount, self.dataModelAnsColCount]), name='weights')
            self.b = tf.Variable(tf.zeros([self.dataModelAnsColCount]), name='bias')
        # eif
        print("Weight: %s" % self.w.get_shape())
        print("Bias: %s" % self.b.get_shape())
        # Model setup
        tf.global_variables_initializer().run()
        # Create a saver
        if self.checkpoint == True:
            saver = tf.train.Saver()
        # eif
        x, y = self.inputs()
        total_loss = self.loss(x, y)
        train_op = self.train(total_loss)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess, coord)
        training_steps = self.totalTrainingSteps
        initial_step = 0
        if self.checkpoint == True:
            # Verify we don't have a checkpoint saved already
            ckpt = tf.train.get_checkpoint_state(os.path.dirname(__file__) + "/checkpoints/")
            if ckpt and ckpt.model_checkpoint_path:
                # Restores from checkpoint
                saver.restore(sess, ckpt.model_checkpoint_path)
                initial_step = int(ckpt.model_checkpoint_path.rsplit('-', 1)[1])
            # eif
        # eif
        # Training loop
        if self.checkpoint == True:
            for step in range(initial_step, training_steps):
                sess.run([train_op])
                if step % self.logPrint == 0:
                    print ("Loss: ", sess.run([total_loss]))
                # eif
                if step % 1000 == 0:
                    saver.save(sess, './checkpoints/eod-model', global_step=step)
                # eif
            # efl
        else:
            for step in range(training_steps):
                sess.run([train_op])
                if step % self.logPrint == 0:
                    print ("Loss: ", sess.run([total_loss]))
                # eif
            # efl
        # eif
        self.evaluate(sess, self.loadTensorData.validate, self.loadTensorData.validateAnswers)
    # ewith
# edef
                             
                            
                            
The method starts by calculating the desired training step count. We use the size of the training set and multiply it by a factor to create a larger or smaller
total number of training steps to run. We set the TensorFlow session to use during our training with this line of code, with tf.Session() as sess.
This ensures that all TensorFlow operations are run on the same session and that the session is released once we exit the with block.
Let's take a look at the first few lines of our training method.
                            
                            
                            
print('Found tensor dimension: %ix%i' % (self.dataModelColCount, self.dataModelAnsColCount))
if self.randomSeed == True:
    self.w = tf.Variable(tf.random_normal([self.dataModelColCount, self.dataModelAnsColCount], stddev=0.5), name='weights')
    self.b = tf.Variable(tf.random_normal([self.dataModelAnsColCount], stddev=0.5), name='bias')
else:
    self.w = tf.Variable(tf.zeros([self.dataModelColCount, self.dataModelAnsColCount]), name='weights')
    self.b = tf.Variable(tf.zeros([self.dataModelAnsColCount]), name='bias')
# eif
print("Weight: %s" % self.w.get_shape())
print("Bias: %s" % self.b.get_shape())
# Model setup
tf.global_variables_initializer().run()
# Create a saver
if self.checkpoint == True:
    saver = tf.train.Saver()
# eif
                             
                            
                            
As a quick sanity check we print out the dimensions of our training tensor by printing the number of columns loaded
from the column data map along with the number of answer columns. The next piece of code determines whether we initialize our weight and bias tensors with random values or just zeros.
Notice that our weight tensor shape is determined by the columns we're including in our data, training, and validation tensors along with the number of answer columns.
The bias tensor is a single dimension and has one value for each answer column. After we set up the shape of our weight and bias tensors
we run the TensorFlow variable initialization call. If checkpoints are enabled we create a new instance of the Saver class.
                            
                            
                            
x, y = self.inputs()
total_loss = self.loss(x, y)
train_op = self.train(total_loss)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess, coord)
training_steps = self.totalTrainingSteps
initial_step = 0
if self.checkpoint == True:
    # Verify we don't have a checkpoint saved already
    ckpt = tf.train.get_checkpoint_state(os.path.dirname(__file__) + "/checkpoints/")
    if ckpt and ckpt.model_checkpoint_path:
        # Restores from checkpoint
        saver.restore(sess, ckpt.model_checkpoint_path)
        initial_step = int(ckpt.model_checkpoint_path.rsplit('-', 1)[1])
    # eif
# eif
                             
                            
                            
In the next few lines of code we initialize our input variables x and y with our training data and training answers by calling our local
inputs method. We then build the loss operation from them and store the training operation in the train_op variable. We also create a local instance of
a training Coordinator. The coordinator works with TensorFlow's threaded input queues via the start_queue_runners method.
We also store a local copy of the total number of training steps and set our training step tracking variable to zero.
                            
                            
                            
Next up, if the checkpoint boolean is set we look for an existing checkpoint by pointing tf.train.get_checkpoint_state at the directory where the checkpoint files
are stored. If there are checkpoint files in that directory we restore the latest saved model and set the current training step, which is parsed out
of the checkpoint file name.
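
A tiny sketch of that file-name parsing, using a hypothetical path rather than one from the tutorial output:

# The saver appends '-<global_step>' to the checkpoint file name.
model_checkpoint_path = './checkpoints/eod-model-3000'
initial_step = int(model_checkpoint_path.rsplit('-', 1)[1])
print(initial_step)  # 3000

The next block of code runs our actual training loop and then calls our evaluate method.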
                            
                            
                            
# Training loop
if self.checkpoint == True:
    for step in range(initial_step, training_steps):
        sess.run([train_op])
        if step % self.logPrint == 0:
            print ("Loss: ", sess.run([total_loss]))
        # eif
        if step % 1000 == 0:
            saver.save(sess, './checkpoints/eod-model', global_step=step)
        # eif
    # efl
else:
    for step in range(training_steps):
        sess.run([train_op])
        if step % self.logPrint == 0:
            print ("Loss: ", sess.run([total_loss]))
        # eif
    # efl
# eif
self.evaluate(sess, self.loadTensorData.validate, self.loadTensorData.validateAnswers)
                             
                            
                            
If our checkpoint flag is set to true we start our training loop at the initial_step that was loaded from our
checkpoint file. The bodies of the two loops are essentially the same. For each training step we run the train_op
operation, which evaluates the loss and applies one gradient descent update to our weights and bias. If our loop iteration
reaches an index such that step % self.logPrint == 0, then we print out the loss we've calculated up to that point.
The only difference in the checkpoint enabled loop is that after every 1000 training steps we save a checkpoint file to
track our training progress. And last but not least we call our evaluate method, which reports the accuracy of the model
against the validation set of data. Any special validation steps are handled through the local variable evalType.
                            
                            
                            
So now that we have covered all of the code running our logistic regression TensorFlow model, let's actually run it!
Uncomment the following line, run(exes["goog_lin_reg_avg100day"]);, and comment this one, run(exes["weight_age_lin_reg"]);.
Now run Main.py and you should see output similar to the output depicted below. This execution loads all the stock price data we have
for an exchange traded fund that tracks the S&P 500. We're trying to predict a yes/no answer category by looking at the patterns created by the
closing price, the opening price, and the simple 100 day moving average.
                            
                            
                            
Application Version: 0.4.0.6
Found loader: load_csv_data
Loading Data: ./data/ivv.csv.xls Type: csv Version: 1.0 Reset: False
Loaded 3130 rows from this data file.
CleanCount: 0 RowCount: 3129 RowsFound: 3129
Found feature type: goog_log_reg
Generating Feature Data: Type: goog_log_reg
Loaded 3129 rows from this data file.
Cleaning row data...
CleanCount: 18 RowCount: 3111 RowsFound: 3111
Generating Tensor Data:
TensorRow Answer Shape: (3111, 2)
TensorRow Data Shape: (3111, 2)
TensorRow Count: 3111
TensorTrain Answer Shape: (2488, 2)
TensorTrain Data Shape: (2488, 2)
TensorTrain Count: 2488
TensorValidate Answer Shape: (621, 2)
2017-08-10 18:37:13.222037: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
TensorValidate Data Shape: (621, 2)
TensorValidate Count: 621
2017-08-10 18:37:13.222052: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
Found training steps: 2488
Found tensor dimension: 2x2
2017-08-10 18:37:13.222058: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Weight: (2, 2)
Bias: (2,)
('Loss: ', [2.0792916])
('Loss: ', [2.0784445])
('Loss: ', [2.0784395])
('Loss: ', [2.0784333])
('Loss: ', [2.0784285])
('Loss: ', [2.0784237])
('Loss: ', [2.0784178])
('Loss: ', [2.0784118])
('Loss: ', [2.0784073])
('Loss: ', [2.0784023])
('Loss: ', [2.0783966])
('Loss: ', [2.0783913])
('Loss: ', [2.0783861])
('Loss: ', [2.0783808])
('Loss: ', [2.0783753])
('Loss: ', [2.0783701])
('Loss: ', [2.0783644])
('Loss: ', [2.0783589])
('Loss: ', [2.0783539])
('Loss: ', [2.0783484])
('Loss: ', [2.0783429])
('Loss: ', [2.0783372])
('Loss: ', [2.0783319])
('Loss: ', [2.0783272])
('Loss: ', [2.078321])
('Loss: ', [2.0783157])
('Loss: ', [2.0783095])
('Loss: ', [2.078305])
('Loss: ', [2.0783])
('Loss: ', [2.078295])
('Loss: ', [2.0782897])
('Loss: ', [2.078284])
('Loss: ', [2.0782781])
('Loss: ', [2.0782731])
('Loss: ', [2.0782676])
('Loss: ', [2.0782623])
('Loss: ', [2.0782566])
('Loss: ', [2.0782521])
('Loss: ', [2.0782464])
('Loss: ', [2.0782411])
('Loss: ', [2.0782354])
('Loss: ', [2.0782299])
('Loss: ', [2.0782251])
('Loss: ', [2.0782197])
('Loss: ', [2.0782151])
('Loss: ', [2.0782101])
('Loss: ', [2.0782032])
('Loss: ', [2.0781982])
('Loss: ', [2.0781929])
('Loss: ', [2.0781877])
Accuracy: 0.5459
                             
                            
                            
Congrats, you've made it to the end of the TensorFlow logistic regression tutorial series. Take some time to make adjustments to the execution
configuration dictionary and see how it affects the output of your program. Try putting in a very large learning rate, then try a very, very small
one. Did the logistic regression diverge? Play around with the settings and have some fun!