TensorBoard
1. TensorBoard
TensorBoard: graph visualization software that ships with TensorFlow
Widely used to inspect inputs, outputs, model functions, and model structure, and for debugging
2. Example
2-layer XOR source code
import tensorflow as tf
import numpy as np

tf.set_random_seed(777)

x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)

X = tf.placeholder(tf.float32, [None, 2], name='x_input')
Y = tf.placeholder(tf.float32, [None, 1], name='y_input')

with tf.name_scope('layer1') as scope:
    W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')
    b1 = tf.Variable(tf.random_normal([2]), name='bias1')
    layer1 = tf.sigmoid(tf.matmul(X, W1) + b1)

    w1_hist = tf.summary.histogram('weight1', W1)
    b1_hist = tf.summary.histogram('bias1', b1)
    layer1_hist = tf.summary.histogram('layer1', layer1)

with tf.name_scope('layer2') as scope:
    W2 = tf.Variable(tf.random_normal([2, 1]), name='weight2')
    b2 = tf.Variable(tf.random_normal([1]), name='bias2')
    hypothesis = tf.sigmoid(tf.matmul(layer1, W2) + b2)

    w2_hist = tf.summary.histogram('weight2', W2)
    b2_hist = tf.summary.histogram('bias2', b2)
    hypothesis_hist = tf.summary.histogram('hypothesis', hypothesis)

# cost/loss function
with tf.name_scope('cost') as scope:
    cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
    cost_summ = tf.summary.scalar('cost', cost)

with tf.name_scope('train') as scope:
    train = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))
accuracy_summ = tf.summary.scalar('accuracy', accuracy)

# launch graph
with tf.Session() as sess:
    # TensorBoard
    merged_summary = tf.summary.merge_all()
    writer = tf.summary.FileWriter('Logs/xor_nn')
    writer.add_graph(sess.graph)

    # Initialize TensorFlow variables
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        summary, _ = sess.run([merged_summary, train], feed_dict={X: x_data, Y: y_data})
        writer.add_summary(summary, global_step=step)

        if step % 100 == 0:
            print("Step: ", step,
                  "Cost: ", sess.run(cost, feed_dict={X: x_data, Y: y_data}),
                  sess.run([W1, W2]))

    # accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy], feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)
3. Five steps to use TensorBoard
1) Select the tensors you want to log
tf.name_scope
- A TensorFlow graph with many nodes is hard to read at a glance, so it needs to be simplified
- Name scopes let you group related nodes under a named category
- When name scopes are used, only the top level of the hierarchy is displayed; each scope can be expanded
- The more of the graph you group under name scopes, the cleaner the visualization (a minimal sketch follows)
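As a minimal sketch (names here are illustrative), ops created inside tf.name_scope get the scope name as a prefix, and the Graphs tab collapses them into a single expandable box:
import tensorflow as tf

with tf.name_scope('layer1'):
    # the variable op is created as 'layer1/weight1'
    W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')

print(W1.name)  # layer1/weight1:0 -> shown as one collapsible 'layer1' node in the Graphs tab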
Summary types
tf.summary.scalar(name, value)
- Works with scalar tensors (only tensors that hold a single value)
- Mainly used for scalar values such as accuracy or cost (loss)
with tf.name_scope('cost') as scope:
    cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
    cost_summ = tf.summary.scalar('cost', cost)
tf.summary.histogram(name, value)
- Used to view the distribution of values
- Works with multi-dimensional tensors
- Lets you monitor the distribution of the input data and the changes in weight and bias values
with tf.name_scope('layer1') as scope:
    W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')
    b1 = tf.Variable(tf.random_normal([2]), name='bias1')
    layer1 = tf.sigmoid(tf.matmul(X, W1) + b1)

    w1_hist = tf.summary.histogram('weight1', W1)
    b1_hist = tf.summary.histogram('bias1', b1)
    layer1_hist = tf.summary.histogram('layer1', layer1)
tf.summary.image(name, tensor, max_outputs)
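- Records image-shaped tensors ([batch, height, width, channels]) so they appear in TensorBoard's Images tab; max_outputs limits how many images are kept per step
For example, the MNIST code below reshapes the flat 784-pixel input into 28x28 single-channel images before logging:
x_image = tf.reshape(X, [-1, 28, 28, 1], name='x_image')
tf.summary.image('x_image', x_image)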
2) Merge all summaries into a single summary
summary = tf.summary.merge_all()
Combines all registered summaries into one op
merged_summary = tf.summary.merge_all()
3) Create a writer and add the graph
writer = tf.summary.FileWriter(logdir)
- Writes the summaries to disk; specifies the directory where the TensorBoard files are saved
writer.add_graph(sess.graph)
Adds the graph
writer = tf.summary.FileWriter('Logs/xor_nn')
writer.add_graph(sess.graph)
4) Run the merged summary and call add_summary(summary)
summary, _ = sess.run([merged_summary, train], feed_dict={X: x_data, Y: y_data})
writer.add_summary(summary, global_step=step)
5) Run TensorBoard
tensorboard --logdir=./logdir
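By default TensorBoard serves its web UI on port 6006, so open http://localhost:6006 in a browser after starting it. For the XOR example above, point --logdir at the directory passed to FileWriter:
tensorboard --logdir=Logs/xor_nn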
4. Checking the TensorBoard results
1) Scalar graphs
2) Graph
5. Multi runs
1) Specify the log files
Use a different log directory for each run
with tf.name_scope('train') as scope:
    train = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
writer = tf.summary.FileWriter('Logs/xor_nn_0.1')
(...)
with tf.name_scope('train') as scope:
    train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
writer = tf.summary.FileWriter('Logs/xor_nn_0.01')
2) Run TensorBoard
- Run it from the parent folder (see the command below)
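Pointing --logdir at the parent folder makes TensorBoard pick up each subdirectory (xor_nn_0.1, xor_nn_0.01) as a separate run, so the two learning rates can be compared on the same charts:
tensorboard --logdir=Logs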
Graph output
6. Example: MNIST
1) Source code
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import random
tf.set_random_seed(777)
mnist = input_data.read_data_sets("Data/MNIST_data/", one_hot=True)
nb_classes = 10
# MNIST data image of shape 28*28 = 784
X = tf.placeholder(tf.float32, [None, 784])
x_image = tf.reshape(X, [-1, 28, 28, 1], name='x_image')
tf.summary.image('x_image', x_image)
# 0-9 digits recognition = 10 classes
Y = tf.placeholder(tf.float32, [None, nb_classes])
with tf.name_scope('Layer1') as scope:
    W1 = tf.Variable(tf.random_normal([784, nb_classes]))
    b1 = tf.Variable(tf.random_normal([nb_classes]))
    # Hypothesis (using softmax)
    hypothesis = tf.nn.softmax(tf.matmul(X, W1) + b1)

with tf.name_scope('Cost') as scope:
    cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
    cost_summary = tf.summary.scalar('cost', cost)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

# Test model
is_correct = tf.equal(tf.argmax(hypothesis, 1), tf.argmax(Y, 1))
with tf.name_scope('Accuracy') as scope:
    accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))
    accuracy_summary = tf.summary.scalar('Accuracy', accuracy)

training_epochs = 15
batch_size = 100

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    merge_summary = tf.summary.merge_all()
    writer = tf.summary.FileWriter('Logs/mnist.log')
    writer.add_graph(sess.graph)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0
        total_batch = int(mnist.train.num_examples / batch_size)

        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            c, _ = sess.run([cost, optimizer], feed_dict={X: batch_xs, Y: batch_ys})
            avg_cost += c / total_batch

        summary = sess.run(merge_summary, feed_dict={X: batch_xs, Y: batch_ys})
        writer.add_summary(summary, global_step=epoch)

        print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.9f}'.format(avg_cost))

    # Test the model using test sets
    print("Accuracy: ", accuracy.eval(session=sess,
          feed_dict={X: mnist.test.images, Y: mnist.test.labels}))

    # sample image show and prediction
    r = random.randint(0, mnist.test.num_examples - 1)
    print("Label:", sess.run(tf.argmax(mnist.test.labels[r:r+1], 1)))
    print("Prediction:", sess.run(tf.argmax(hypothesis, 1),
          feed_dict={X: mnist.test.images[r:r+1]}))

    print("sample image shape:", mnist.test.images[r:r+1].shape)
    plt.imshow(mnist.test.images[r:r+1].reshape(28, 28), cmap='Greys', interpolation='nearest')
    plt.show()
2) Graphs
Scalar graphs
Input Image
Graph