Text Classification using TensorFlow/Keras on AI Platform

This notebook illustrates:

  1. Creating datasets for AI Platform using BigQuery
  2. Creating a text classification model using the Estimator API with a Keras model
  3. Training on Cloud AI Platform
  4. Deploying the model
  5. Predicting with the model
  6. Rerunning with a pre-trained embedding
In [ ]:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
In [35]:
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.14'

if 'COLAB_GPU' in os.environ:  # this is always set on Colab, the value is 0 or 1 depending on whether a GPU is attached
  from google.colab import auth
  # download "sidecar files" since on Colab, this notebook will be on Drive
  !rm -rf txtclsmodel
  !git clone --depth 1 https://github.com/GoogleCloudPlatform/training-data-analyst
  !mv  training-data-analyst/courses/machine_learning/deepdive/09_sequence/txtclsmodel/ .
  !rm -rf training-data-analyst
  # downgrade TensorFlow to the version this notebook has been tested with
  !pip install --upgrade tensorflow==$TFVERSION
In [ ]:
import tensorflow as tf

We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.

We will use Hacker News as our data source. It is an aggregator that displays tech-related headlines from various sources.

Creating Dataset from BigQuery

Hacker News headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.

Here is a sample of the dataset:

In [ ]:
%load_ext google.cloud.bigquery
In [ ]:
%%bigquery --project $PROJECT
SELECT
  url, title, score
FROM
  `bigquery-public-data.hacker_news.stories`
WHERE
  LENGTH(title) > 10
  AND score > 10
  AND LENGTH(url) > 0
LIMIT 10

Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., we want to be left with nytimes.

In [ ]:
%%bigquery --project $PROJECT
SELECT
  ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
  COUNT(title) AS num_articles
FROM
  `bigquery-public-data.hacker_news.stories`
WHERE
  REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
  AND LENGTH(title) > 10
GROUP BY
  source
ORDER BY num_articles DESC
LIMIT 10

Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for AI Platform.

In [ ]:
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)

query = """
SELECT source, LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title FROM
  (SELECT
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
    title
  FROM
    `bigquery-public-data.hacker_news.stories`
  WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
    AND LENGTH(title) > 10
  )
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
"""

df = bq.query(query + " LIMIT 5").to_dataframe()
df.head()

For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).

A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning).

In [ ]:
traindf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0").to_dataframe()
evaldf  = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0").to_dataframe()

Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.

We can also see that within each dataset, the classes are roughly balanced.

In [ ]:
traindf['source'].value_counts()
In [ ]:
evaldf['source'].value_counts()

Finally we will save our data, which is currently in-memory, to disk.

In [ ]:
import os, shutil
DATADIR = 'data/txtcls'
shutil.rmtree(DATADIR, ignore_errors=True)
os.makedirs(DATADIR)
traindf.to_csv(os.path.join(DATADIR, 'train.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
evaldf.to_csv(os.path.join(DATADIR, 'eval.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
In [ ]:
!head -3 data/txtcls/train.tsv
In [ ]:
!wc -l data/txtcls/*.tsv

TensorFlow/Keras Code

Please explore the code in the txtclsmodel directory: model.py contains the TensorFlow model, and task.py parses command-line arguments and launches the training job.

In particular, look for the following:

  1. tf.keras.preprocessing.text.Tokenizer.fit_on_texts() to generate a mapping from our word vocabulary to integers
  2. tf.keras.preprocessing.text.Tokenizer.texts_to_sequences() to encode our sentences into a sequence of their respective word-integers
  3. tf.keras.preprocessing.sequence.pad_sequences() to pad all sequences to be the same length

The embedding layer in the Keras model takes care of one-hot encoding these integers and learning a dense embedding representation from them.

Finally, we pass the embedded text representation through a CNN model.
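
To make the three preprocessing steps concrete, here is a minimal, self-contained sketch of the tokenize-and-pad flow (the sentences and the maxlen of 50 are illustrative values, not necessarily the trainer's actual settings):

In [ ]:
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentences = ['show hn a tiny text classifier',
             'hoover dam turned into a giant battery']

tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)                    # 1. build the word -> integer vocabulary
sequences = tokenizer.texts_to_sequences(sentences)  # 2. encode each sentence as word-integers
padded = pad_sequences(sequences, maxlen=50)         # 3. zero-pad (on the left by default) to a fixed length
print(padded.shape)  # (2, 50)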

Run Locally (optional step)

Let's make sure the code compiles by running it locally for a fraction of an epoch. This may not work if you don't have all the packages gcloud requires installed locally (as is the case on Colab). It's an optional step; feel free to move on to training on the cloud.

In [ ]:
%%bash
pip install google-cloud-storage
rm -rf txtcls_trained
gcloud ai-platform local train \
   --module-name=trainer.task \
   --package-path=${PWD}/txtclsmodel/trainer \
   -- \
   --output_dir=${PWD}/txtcls_trained \
   --train_data_path=${PWD}/data/txtcls/train.tsv \
   --eval_data_path=${PWD}/data/txtcls/eval.tsv \
   --num_epochs=0.1

Train on the Cloud

Let's first copy our training data to the cloud:

In [ ]:
%%bash
gsutil cp data/txtcls/*.tsv gs://${BUCKET}/txtcls/
In [ ]:
%%bash
OUTDIR=gs://${BUCKET}/txtcls/trained_fromscratch
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
 --region=$REGION \
 --module-name=trainer.task \
 --package-path=${PWD}/txtclsmodel/trainer \
 --job-dir=$OUTDIR \
 --scale-tier=BASIC_GPU \
 --runtime-version=$TFVERSION \
 -- \
 --output_dir=$OUTDIR \
 --train_data_path=gs://${BUCKET}/txtcls/train.tsv \
 --eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \
 --num_epochs=5

Change the job name in the following cell to the one from your own run. View the job in the GCP console, and wait until the job is complete.

In [ ]:
!gcloud ai-platform jobs describe txtcls_190209_224828


What accuracy did you get? You should see around 80%.

Deploy trained model

Once training completes, you will see the exported models in the output directory you specified on Google Cloud Storage.

You should see one model for each training checkpoint (default is every 1000 steps).

In [ ]:
%%bash
gsutil ls gs://${BUCKET}/txtcls/trained_fromscratch/export/exporter/

We will take the last export and deploy it as a REST API using Google AI Platform.

In [ ]:
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="v1"  # any version name works; the prediction code below uses the model's default version
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/txtcls/trained_fromscratch/export/exporter/ | tail -1)
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} --quiet
#gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION

Get Predictions

Here are some actual Hacker News headlines gathered in July 2018. These titles were not part of the training or evaluation datasets.

In [ ]:
techcrunch = [
  'Uber shuts down self-driving trucks unit',
  'Grover raises €37M Series A to offer latest tech products as a subscription',
  'Tech companies can now bid on the Pentagon’s $10B cloud contract'
]
nytimes = [
  '‘Lopping,’ ‘Tips’ and the ‘Z-List’: Bias Lawsuit Explores Harvard’s Admissions',
  'A $3B Plan to Turn Hoover Dam into a Giant Battery',
  'A MeToo Reckoning in China’s Workplace Amid Wave of Accusations'
]
github = [
  'Show HN: Moon – 3kb JavaScript UI compiler',
  'Show HN: Hello, a CLI tool for managing social media',
  'Firefox Nightly added support for time-travel debugging'
]

Our serving input function expects the already-tokenized representations of the headlines, so we do that pre-processing in client code before calling the REST API.

Note: Ideally we would do these transformations in the TensorFlow graph directly instead of relying on separate client pre-processing code (see: training-serving skew); however, the pre-processing functions we're using are Python functions, so they cannot be embedded in a TensorFlow graph.

See the text_classification_native notebook for a solution to this.

In [ ]:
import pickle
from tensorflow.python.keras.preprocessing import sequence
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json

requests = techcrunch+nytimes+github

# Tokenize and pad sentences using same mapping used in the deployed model
tokenizer = pickle.load( open( "txtclsmodel/tokenizer.pickled", "rb" ) )

requests_tokenized = tokenizer.texts_to_sequences(requests)
requests_tokenized = sequence.pad_sequences(requests_tokenized,maxlen=50)

# JSON format the requests
request_data = {'instances':requests_tokenized.tolist()}

# Authenticate and call the AI Platform prediction API
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
          discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')

parent = 'projects/%s/models/%s' % (PROJECT, 'txtcls') #version is not specified so uses default
response = api.projects().predict(body=request_data, name=parent).execute()

# Format and print response
for i in range(len(requests)):
  print('\n{}'.format(requests[i]))  # print the headline alongside its class probabilities
  print(' github    : {}'.format(response['predictions'][i]['dense'][0]))
  print(' nytimes   : {}'.format(response['predictions'][i]['dense'][1]))
  print(' techcrunch: {}'.format(response['predictions'][i]['dense'][2]))

How many of your predictions were correct?

Rerun with Pre-trained Embedding

In the previous model we trained our word embedding from scratch. Often, we get better performance and/or faster convergence by leveraging a pre-trained embedding. This is a similar concept to transfer learning in image classification.

We will use the popular GloVe embedding, which is trained on Wikipedia as well as various news sources like the New York Times.

You can read more about GloVe at the project homepage: https://nlp.stanford.edu/projects/glove/

You can download the embedding files directly from the stanford.edu site, but we've rehosted it in a GCS bucket for faster download speed.
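
For intuition, here is a sketch of how a GloVe file can be turned into an initializer for a Keras embedding layer. The actual loading logic lives in model.py; the function and variable names below are illustrative, not the trainer's own.

In [ ]:
import numpy as np

def load_glove_matrix(path, word_index, embedding_dim=200):
    # Each line of a GloVe file is a word followed by its vector components.
    vectors = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            vectors[parts[0]] = np.asarray(parts[1:], dtype='float32')
    # Row i holds the vector for the word the tokenizer mapped to integer i
    # (Keras Tokenizer indexes from 1); words missing from GloVe keep zero rows.
    matrix = np.zeros((len(word_index) + 1, embedding_dim))
    for word, i in word_index.items():
        if word in vectors:
            matrix[i] = vectors[word]
    return matrix

# The matrix then initializes the embedding layer instead of random weights, e.g.:
# embedding_matrix = load_glove_matrix('glove.6B.200d.txt', tokenizer.word_index)
# layer = tf.keras.layers.Embedding(input_dim=embedding_matrix.shape[0],
#                                   output_dim=200,
#                                   weights=[embedding_matrix])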

In [36]:
!gsutil cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt gs://$BUCKET/txtcls/
Copying gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt [Content-Type=text/plain]...
- [1 files][661.3 MiB/661.3 MiB]      0.0 B/s                                   
Operation completed over 1 objects/661.3 MiB.                                    

Once the embedding file is in your bucket, re-run your cloud training job with the added command line argument:

--embedding_path=gs://${BUCKET}/txtcls/glove.6B.200d.txt

Be sure to change your OUTDIR so it doesn't overwrite the previous model.

While the final accuracy may not change significantly, you should notice the model is able to converge to it much more quickly because it no longer has to learn an embedding from scratch.


Next step

Client-side tokenizing in Python is hugely problematic. See Text classification with native serving for how to carry out the preprocessing in the serving function itself.

Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License