Running Flask and Redis with Docker

What this post covers

  • Follow the official Docker tutorial.
  • Run a simple Flask app in Docker.
  • Connect to Redis from Flask.
  • Use docker-compose.

Folder structure

├── Dockerfile
├── app.py
├── docker-compose.yml
└── requirements.txt

app.py

from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis', port=6379)


@app.route('/')
def hello():
    count = redis.incr('hits')
    return 'Hello from Docker! I have been seen {} times.\n'.format(count)


if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)

requirements.txt

Flask==0.12.2
redis==2.10.6

Dockerfile

FROM python:3.6.2
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

docker-compose.yml

version: '3'
services:
  web:
    build: ./
    ports:
     - "5000:5000"
    volumes:
     - .:/code
  redis:
    image: "redis"

Checking that it works

Starting the server

docker-compose up

Accessing the site

  • Check the IP
$ docker-machine ip
192.168.99.100

Behavior

  • The counter goes up every time the page is accessed.
    • "Hello from Docker! I have been seen 5 times."

Paper notes: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond

Ramesh Nallapati et al 2016


Notes about

“Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond”


Introduction

Authors

  • apply the attentional encoder-decoder RNN to text summarization.
    • The model was originally developed for machine translation.
  • propose novel models and show that they provide additional improvements in performance.
    • The models use additional linguistic features such as parts-of-speech tags.

Text summarization

  • Abstractive text summarization is the task of generating a headline or short summary consisting of a few sentences that captures the salient ideas of an article or a passage.

The difficulty of this kind of problem

  • In summarization, original documents are compressed in a lossy manner such that key concepts are preserved.

Models: RNN language model (preparation)

  • (Extraction of embedding vector) $$\bar{y}_t = E y_{t-1}$$
  • (Calculation of hidden layer) $$h_t = \tanh \left( W^{(l)} [\bar{y}_t, h_{t-1}]^t + b^{(l)} \right)$$
  • (Calculation of output layer) $$o_t = W^{(o)} h_t + b^{(o)}$$
  • (Calculation of probability) $$p_t = \text{softmax}(o_t)$$
  • (Extraction of probability) $$P(y_t| Y_{<t}) = p_t \cdot y_t$$
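
To make the shapes concrete, here is a minimal numpy sketch of one step of this RNN language model (the dimensions and parameter names below are illustrative assumptions, not from the paper):

import numpy as np

V, E_DIM, H_DIM = 10, 4, 8            # vocabulary, embedding, and hidden sizes (illustrative)
rng = np.random.default_rng(0)

E = rng.normal(size=(E_DIM, V))       # embedding matrix
W_l = rng.normal(size=(H_DIM, E_DIM + H_DIM))
b_l = np.zeros(H_DIM)
W_o = rng.normal(size=(V, H_DIM))
b_o = np.zeros(V)


def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()


def step(y_prev_onehot, h_prev):
    """One step: embed y_{t-1}, update h_t, and return p_t = P(y_t | Y_{<t})."""
    y_bar = E @ y_prev_onehot                                  # \bar{y}_t = E y_{t-1}
    h = np.tanh(W_l @ np.concatenate([y_bar, h_prev]) + b_l)   # h_t
    o = W_o @ h + b_o                                          # o_t
    return softmax(o), h                                       # p_t, h_t


y0 = np.eye(V)[3]                     # one-hot encoding of the previous word
p1, h1 = step(y0, np.zeros(H_DIM))
print(p1.sum())                       # the probabilities sum to 1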

Models: bidirectional RNN language model (preparation)

  • (Forward calculation of hidden layer)
    • $$\overrightarrow{h}_t = \tanh \left( \overrightarrow{W}^{(l)} [\bar{y}_t, \overrightarrow{h}_{t-1}]^t + \overrightarrow{b}^{(l)} \right)$$
  • (Backward calculation of hidden layer)
    • $$\overleftarrow{h}_t = \tanh \left( \overleftarrow{W}^{(l)} [\bar{y}_t, \overleftarrow{h}_{t+1}]^t + \overleftarrow{b}^{(l)} \right)$$
  • (Calculation of output layer)
    • $$o_t = W^{(o)} [\overrightarrow{h}_t, \overleftarrow{h}_t]^t + b^{(o)}$$

Models: encoder-decoder RNN language model (preparation)

  • (Input) $$X=(x_i)_{i=1}^I$$
  • (Output) $$Y=(y_j)_{j=0}^{J+1}$$ where $$y_0=BOS$$ and $$y_{J+1}=EOS$$
  • (Encoder) $$z=\Lambda(X)$$
  • (Hidden layer) $$h_j = \phi(h_{j-1}, y_{j-1})$$ where $$h_0 = z$$
  • (Probability)
    • $$P(Y|X) = \prod_{j=1}^{J+1} P(y_j|Y_{<j}, X)$$
    • $$P(y_j|Y_{<j}, X) = \psi(h_j, y_j)$$

Models: Soft attention mechanism (preparation)

  • (Encoder) $$h_i^{(s)} = \phi^{(s)}(x_i, h_{i-1}^{(s)})$$
  • (Decoder) $$h_j^{(t)} = \phi^{(t)}(y_j, h_{j-1}^{(t)})$$
  • (Attention)
    • $$\bar{h} = \sum_{i=1}^I a_ih_i^{(s)}$$ where $$a_i = \frac{\exp(e_i)}{\sum_{\tilde{i}=1}^I \exp(e_{\tilde{i}})}$$ and $$e_i = \Omega(h_i^{(s)}, h_j^{(t)})$$
    • $$\hat{h}_j^{(t)} = \tanh(W^{(a)}[\bar{h}, h_j^{(t)}])$$
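
A minimal numpy sketch of this soft attention step, using a dot product for the score function $$\Omega$$ (the score function and sizes are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
I, H = 5, 8                                  # source length and hidden size (illustrative)
h_src = rng.normal(size=(I, H))              # encoder states h_i^{(s)}
h_dec = rng.normal(size=H)                   # current decoder state h_j^{(t)}
W_a = rng.normal(size=(H, 2 * H))


def softmax(e):
    e = np.exp(e - e.max())
    return e / e.sum()


e = h_src @ h_dec                            # e_i = Omega(h_i^{(s)}, h_j^{(t)}) as a dot product
a = softmax(e)                               # attention weights a_i
h_bar = a @ h_src                            # context vector \bar{h}
h_hat = np.tanh(W_a @ np.concatenate([h_bar, h_dec]))   # attentional hidden state \hat{h}_j^{(t)}
print(a.round(3), h_hat.shape)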

Models: Large vocabulary trick (preparation)

  • Large vocabulary problems

    • The computational cost of $$\text{softmax}(o_t)$$ is very high when the vocabulary size is large, because $$\dim(o_t)$$ equals the vocabulary size.
  • Approaches

    • model-specific approaches
      • noise-contrastive estimation
      • binary hierarchical softmax
    • translation-specific approaches

Models: Authors apply the encoder-decoder RNN with attention and the large vocabulary trick (LVT)

  • The encoder-decoder RNN with attention is described in preparation slides.
  • Authors' LVT is that
    • the decoder-vocabulary of each mini-batch is restricted to words in the source documents of that batch.
    • the most frequent words in the target dictionary are added until the vocabulary reaches a fixed size (see the sketch below).
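
A rough sketch of how such a per-batch decoder vocabulary could be assembled (the helper below illustrates the idea only; it is not the authors' implementation):

from collections import Counter


def batch_vocabulary(batch_source_docs, target_word_counts, max_size):
    """Restrict the decoder vocabulary to the batch's source words,
    then pad with the most frequent target-side words up to max_size."""
    vocab = set()
    for doc in batch_source_docs:          # words appearing in this mini-batch's source documents
        vocab.update(doc)
    for word, _ in target_word_counts.most_common():
        if len(vocab) >= max_size:
            break
        vocab.add(word)
    return vocab


target_counts = Counter('the cat sat on the mat the end'.split())
batch = [['a', 'cat', 'slept'], ['dogs', 'bark']]
print(sorted(batch_vocabulary(batch, target_counts, max_size=8)))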

Models: Authors propose a model that uses additional linguistic features.

  • Linguistic features
    • parts-of-speech tags (POS tag)
    • named-entity tags
      • MUC: organization, person, location, date, time, money, percent
    • TF (Term Frequency)
      • $$ tf_{i, j} = \frac{n_{i, j}}{\sum_k n_{k, j}}$$
      • $$n_{i, j}$$ is the number of times that term $$t_i$$ occurs in document $$d_j$$.
    • IDF (Inverse Document Frequency)
      • $$ idf_{i} = \log \frac{|D|}{|\{d \in D : t_i \in d\}|} $$
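
A small pure-Python sketch of these two quantities on toy documents, just to make the definitions concrete:

import math
from collections import Counter

docs = [['the', 'cat', 'sat'], ['the', 'dog', 'sat', 'sat'], ['a', 'bird']]


def tf(term, doc):
    counts = Counter(doc)
    return counts[term] / sum(counts.values())        # n_{i,j} / sum_k n_{k,j}


def idf(term, docs):
    df = sum(1 for d in docs if term in d)            # |{d : t_i in d}|
    return math.log(len(docs) / df)


print(tf('sat', docs[1]))        # 2 / 4 = 0.5
print(idf('sat', docs))          # log(3 / 2)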

Models: Authors propose modeling rare/unseen words using switching generator-pointer.

  • A mechanism is needed to handle unseen words, because the vocabulary is fixed at training time.
  • In general, the most common way is to emit an ‘UNK’ token as a placeholder.
  • In summarization, an intuitive alternative is to simply point to the word's location in the source document.
  • Authors model this notion using their novel switching decoder/pointer architecture.
    • The switch decides between using the generator or a pointer at every time step.

Models: The switch

  • The probability of the switch turning on at the i-th time-step is
    • $$P(s_i = 1) = \sigma(v^s \cdot (W_h^s h_i + W_e^s E o_{i-1} + W_c^s c_i + b^s))$$
    • where $$E o_{i-1}$$ is the embedding vector of the emission from the previous time step,
    • $$c_i$$ is the attention-weighted context vector.
  • The pointer value at the i-th time-step is
    • $$p_i = \arg \max_{j} P^a_i(j)$$ for $$j \in \{1, \dots, N_d\}$$
    • where $$P^a_i(j) \propto \exp(v^a \cdot (W_h^a h_{i-1} + W_e^a E o_{i-1} + W_c^a h^d_{j} + b^a))$$
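
A shape-level numpy sketch of the switch probability (the parameter names follow the slide; all sizes and values are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
H, E_DIM, C = 8, 4, 8                       # hidden, embedding, and context sizes (illustrative)
v_s = rng.normal(size=H)
W_hs = rng.normal(size=(H, H))
W_es = rng.normal(size=(H, E_DIM))
W_cs = rng.normal(size=(H, C))
b_s = np.zeros(H)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


h_i = rng.normal(size=H)                    # decoder state at step i
e_prev = rng.normal(size=E_DIM)             # embedding of the previous emission, E o_{i-1}
c_i = rng.normal(size=C)                    # attention-weighted context vector

p_switch = sigmoid(v_s @ (W_hs @ h_i + W_es @ e_prev + W_cs @ c_i + b_s))
print(p_switch)                             # P(s_i = 1): generate from the vocabulary vs. point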

Models: The training of the switch parameters.

  • At training time, explicit pointer information is provided whenever the summary word does not exist in the target vocabulary.
  • The loss function is
    • $$\log P(y|x) = \sum_{i} \left( g_i \log \{ P(y_i|y_{-i}, x) P(s_i) \} + (1-g_i) \log \{ P(p(i)|y_{-i}, x) (1-P(s_i)) \} \right)$$
  • $$g_i$$ is an indicator function that is set to 0 whenever the word at position $$i$$ in the summary is OOV with respect to the decoder vocabulary.

Models: Capturing hierarchical document structure with hierarchical attention

  • In summarization, it is important to identify the key sentences from which the summary can be drawn.
  • This model aims to capture this notion of two levels of importance using bi-directional RNNs on the source side, one at the word level and the other at the sentence level.
  • $$P^a(j) = \frac{P^a_w(j)\, P^a_s(s(j))}{\sum_{k=1}^{N_d} P^a_w(k)\, P^a_s(s(k))}$$
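
A tiny numpy sketch of this re-weighting: word-level attention is rescaled by the attention of the sentence it belongs to and then renormalized (toy numbers, purely illustrative):

import numpy as np

P_word = np.array([0.1, 0.4, 0.2, 0.3])     # word-level attention P^a_w(j) over N_d = 4 words
P_sent = np.array([0.7, 0.3])               # sentence-level attention P^a_s
sent_of_word = np.array([0, 0, 1, 1])       # s(j): index of the sentence containing word j

unnorm = P_word * P_sent[sent_of_word]      # P^a_w(j) * P^a_s(s(j))
P = unnorm / unnorm.sum()                   # renormalize over all document words
print(P.round(3), P.sum())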

Passing a custom function to agg in pandas.groupby, with numpy.array elements

import pandas as pd
import numpy as np


def g(x):
    # x is the Series holding one group's 'b' values; sum the numpy arrays element-wise.
    y = None
    for i, a in x.items():
        if y is None:
            y = a.copy()
        else:
            y += a
    return Foo(y)


class Foo:
    def __init__(self, x):
        self.x = x

    def __str__(self):
        return str(self.x)


if __name__ == '__main__':
    # Each cell of column 'b' holds a numpy array; group by 'a' and aggregate 'b' with g.
    df = pd.DataFrame({'a': [1, 1, 2], 'b': [np.array([1, 2]), np.array([1, 2]), np.array([1, 2])]})
    x = df.groupby('a').agg({'b': g})
    print(repr(x))
    print(repr(df))

How to use sklearn.ensemble.AdaBoostClassifier

Official documentation

Parameters

  • base_estimator=None,
  • n_estimators=50,
  • learning_rate=1.0,
  • algorithm='SAMME.R',
  • random_state=None

Tuning the base estimator's parameters with grid search

  • Prefix each key with base_estimator__, as shown below.
param_grid = {'base_estimator__max_depth': [4, 5, 6, None], 'base_estimator__max_features': [2, None],
              'base_estimator__min_samples_split': [2, 8, 16, 32],
              'base_estimator__min_samples_leaf': [2, 8, 16, 32], 'base_estimator__max_leaf_nodes': [50, None],
              'learning_rate': [0.5, 1, 4]}
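
For context, a minimal end-to-end sketch of how such a grid could be used (the dataset and the smaller grid here are illustrative; note that base_estimator is the parameter name in older scikit-learn versions and was renamed to estimator in 1.2+):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

# Keys prefixed with base_estimator__ are routed to the DecisionTreeClassifier.
param_grid = {'base_estimator__max_depth': [2, 4, None],
              'learning_rate': [0.5, 1.0]}

clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=50, random_state=0)
search = GridSearchCV(clf, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)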

How to use sklearn.svm.SVC

Official documentation

Parameters

  • C=1.0
  • kernel='rbf'
  • degree=3
  • gamma='auto'
  • coef0=0.0
  • shrinking=True (under investigation)
  • probability=False
  • tol=0.001
  • cache_size=200 (under investigation)
  • class_weight=None
  • verbose=False
  • max_iter=-1
  • decision_function_shape=None (under investigation)
  • random_state=None

Observing the effect of changing the parameters

Sample data

f:id:nsb248:20170224232501p:plain

With the default parameters

  • accuracy: 0.635
  • std: 0.362 f:id:nsb248:20170224221555p:plain
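
The accuracy and standard deviation above look like cross-validation statistics; a minimal sketch of how such numbers could be obtained (the dataset below is synthetic, not the one in the figures):

from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

scores = cross_val_score(SVC(), X, y, cv=5)        # default parameters (C=1.0, kernel='rbf')
print('accuracy: %.3f' % scores.mean())
print('std: %.3f' % scores.std())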

C

  • For the formula in which C appears, see here.
  • C controls the trade-off between classifying the training data correctly and keeping the model simple.
  • A large C prioritizes correct classification, and the model becomes more complex (the L2 norm of the model parameters grows).
  • Classification (C=10, kernel='rbf') f:id:nsb248:20170224232758p:plain
  • Classification (C=1000) f:id:nsb248:20170224232817p:plain

kernel

  • Possible values:

    • 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable
  • Classification (kernel='linear') f:id:nsb248:20170224233545p:plain

  • Classification (kernel='poly') f:id:nsb248:20170224233604p:plain

  • Classification (kernel='rbf') f:id:nsb248:20170224233616p:plain

  • Classification (kernel='sigmoid') f:id:nsb248:20170224233636p:plain

  • Only rbf classified the data well. Still investigating why.

degree

  • Only used when kernel='poly'.
  • Specifies the degree of the polynomial kernel.

gamma

  • Used when kernel='rbf', 'poly', or 'sigmoid'.
  • For the formula in which gamma appears, see here.
  • Classification (kernel='rbf', gamma=0.1) f:id:nsb248:20170224235706p:plain
  • Classification (kernel='rbf', gamma=2.0) f:id:nsb248:20170224235739p:plain
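
A minimal sketch of producing such comparison plots for different gamma values (synthetic data and matplotlib, purely illustrative):

import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, gamma in zip(axes, [0.1, 2.0]):
    clf = SVC(kernel='rbf', gamma=gamma).fit(X, y)
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    ax.contourf(xx, yy, Z, alpha=0.3)            # decision regions
    ax.scatter(X[:, 0], X[:, 1], c=y, s=10)
    ax.set_title('gamma=%s' % gamma)
plt.show()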

coef0

  • The independent term of the kernel function. Important when kernel='poly' or 'sigmoid'.

How to use RandomForestClassifier

Official documentation

Parameters

  • Since it is an ensemble of decision trees, most parameters are the same as DecisionTree's. See here.

    Parameters specific to this class

  • n_estimators
  • bootstrap
  • oob_score
  • n_jobs
  • verbose
  • warm_start (under investigation)

Observing the effect of changing the parameters

n_estimators

  • The other parameters are left at their defaults, so the forest is a combination of decision trees that individually generalize poorly (see the sketch after this list).
  • AUC progression f:id:nsb248:20170224185532p:plain
  • Compared with a single estimator, adding just a few more estimators improves the accuracy considerably.
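
A minimal sketch of sweeping n_estimators and tracking cross-validated AUC (the data below is synthetic; the figure above was presumably produced from the author's own dataset):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

for n in [1, 2, 5, 10, 50, 100]:
    clf = RandomForestClassifier(n_estimators=n, random_state=0)
    auc = cross_val_score(clf, X, y, cv=5, scoring='roc_auc')
    print('n_estimators=%3d  AUC=%.3f' % (n, auc.mean()))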

bootstrap

  • Flag for whether to draw bootstrap samples from the training data when building each tree. Bootstrapping improves performance, presumably because it reduces the correlation between the trees (to be confirmed).
  • AUC progression
    • Red is True, blue is False f:id:nsb248:20170224192550p:plain

verbose

  • Outputs progress logs while the trees are being built.
[Parallel(n_jobs=1)]: Done  10 out of  10 | elapsed:    0.0s finished
[Parallel(n_jobs=1)]: Done  10 out of  10 | elapsed:    0.0s finished
[Parallel(n_jobs=1)]: Done  10 out of  10 | elapsed:    0.0s finished

Limits of DecisionTreeClassifier

Previous posts

Exploring the limits

Mixing in an irrelevant feature

  • Add a feature called x2.
  • The target does not depend on x2 at all.

    Results

  • accuracy: 0.810
  • std: 0.100
  • Split tree f:id:nsb248:20170224174722p:plain
  • Feature importances
{'x2': 0.24604190914667379, 'x1': 0.45724873019357137, 'x0': 0.29670936065975489}
  • The accuracy got considerably worse.
  • x2, which should be irrelevant, still has an importance of about 0.25.
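
A minimal sketch of this experiment with synthetic data (x0 and x1 are informative, x2 is pure noise; this is not the author's dataset):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X = np.hstack([X, rng.normal(size=(len(X), 1))])   # append an irrelevant feature x2

clf = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
clf.fit(X, y)
print('accuracy: %.3f  std: %.3f' % (scores.mean(), scores.std()))
print(dict(zip(['x0', 'x1', 'x2'], clf.feature_importances_)))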

Rotating the data by 45 degrees

Results

  • accuracy: 0.835
  • std: 0.037
  • Classification f:id:nsb248:20170224180953p:plain
  • Split tree f:id:nsb248:20170224181001p:plain
  • The accuracy degrades slightly.
  • Looking at the classification result, the boundary that should be a straight line becomes a staircase. This is unavoidable in theory, since each split is axis-aligned.
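
A small sketch of rotating 2D data by 45 degrees and re-evaluating the tree (synthetic data, purely illustrative):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

theta = np.pi / 4                                   # 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # rotation matrix
X_rot = X @ R.T

for name, data in [('original', X), ('rotated', X_rot)]:
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), data, y, cv=5)
    print('%s: accuracy %.3f (std %.3f)' % (name, scores.mean(), scores.std()))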