Other testing

Contents:

no.  accuracy  architecture                                                           difference from original
---  --------  ---------------------------------------------------------------------  -------------------------------------------
#1   0.65      (conv + pool)*3 + fc*1 + softmax                                       (original; only one channel)
#2   0.67      (conv + pool)*3 + fc*1 + softmax                                       96 filters in each conv
#3   0.72      conv*5 + pool*1 + conv*3 + pool*1 + fc(dropout)*1 + fc*1 + softmax     three channels
#4   0.74      conv*5 + pool*1 + conv*3 + pool*1 + fc(dropout)*1 + fc*1 + softmax     three channels & mean/std normalization
#5   0.77      (conv + norm + pool)*2 + conv*3 + pool*1 + (fc + dropout)*2 + softmax  AlexNet-style
#6   0.77      (conv + norm + pool)*2 + conv*3 + pool*1 + (fc + dropout)*2 + softmax  AlexNet-style (different filter sizes from #5)

#1

[Accuracy]: 0.65

[Architecture]

  • input layer: 1 * 32 * 32
  • convolution1 layer
    • 16 filters
    • filter size: (3, 3)
  • pool1 layer:
    • pool size: (2, 2)
  • convolution2 layer:
    • 32 filters
    • filter size: (2, 2)
  • pool2 layer:
    • pool size: (2, 2)
  • convolution3 layer:
    • 64 filters
    • filter size: (2, 2)
  • pool3 layer:
    • pool size: (2, 2)
  • hidden4 layer:
    • 200 units
  • softmax output layer: 10 units
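
With "valid" convolutions and non-overlapping 2x2 pooling, the feature-map sizes implied by this architecture can be traced in a few lines; this is a sketch derived from the filter and pool sizes listed above, not the training code:

```python
# Walk the #1 stack and print each feature map's shape.
# Valid convolution: size -> size - filter + 1; 2x2 max-pool: size -> size // 2.
layers = [
    ("conv1", "conv", 16, 3),
    ("pool1", "pool", None, 2),
    ("conv2", "conv", 32, 2),
    ("pool2", "pool", None, 2),
    ("conv3", "conv", 64, 2),
    ("pool3", "pool", None, 2),
]

channels, size = 1, 32  # input: 1 * 32 * 32
for name, kind, num_filters, k in layers:
    if kind == "conv":
        channels, size = num_filters, size - k + 1
    else:  # pool
        size = size // 2
    print(f"{name}: {channels}x{size}x{size}")

# The flattened input to hidden4 is therefore 64 * 3 * 3 = 576 values.
print("hidden4 input size:", channels * size * size)
```

So hidden4 sees a 576-dimensional vector feeding its 200 units.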

Parameters:

#layer parameters:
input_shape = (None, 1, 32, 32),
conv1_num_filters = 16, conv1_filter_size = (3, 3), pool1_pool_size = (2, 2),
conv2_num_filters = 32, conv2_filter_size = (2, 2), pool2_pool_size = (2, 2),
conv3_num_filters = 64, conv3_filter_size = (2, 2), pool3_pool_size = (2, 2),
hidden4_num_units = 200,
output_nonlinearity = softmax,
output_num_units = 10,

#optimization parameters:
update = nesterov_momentum,
update_learning_rate = 0.005,
update_momentum = 0.9,
regression = False,
max_epochs = 500,
verbose = 1
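
The optimizer named above is Nesterov momentum with learning rate 0.005 and momentum 0.9. The textbook form of the update (which may differ in detail from Lasagne's nesterov_momentum implementation) can be sketched on a 1-D quadratic:

```python
# Nesterov momentum on f(x) = x^2 (gradient 2x), common formulation:
#   v <- mu * v - lr * grad(x)
#   x <- x + mu * v - lr * grad(x)
def nesterov_step(x, v, lr=0.005, mu=0.9):
    grad = 2.0 * x
    v = mu * v - lr * grad
    x = x + mu * v - lr * grad
    return x, v

x, v = 5.0, 0.0
for _ in range(2000):
    x, v = nesterov_step(x, v)
print(x)  # close to the minimum at 0
```

The lookahead term `mu * v` is what distinguishes Nesterov momentum from plain heavy-ball momentum.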

Result: [test_1.txt]

(figure to be added)

  epoch    trn loss    val loss    trn/val    valid acc  dur
-------  ----------  ----------  ---------  -----------  ------
(etc.)
    492     0.00002     5.16825    0.00000      0.66160  21.66s
    493     0.00002     5.16888    0.00000      0.66160  21.53s
    494     0.00002     5.16950    0.00000      0.66160  21.54s
    495     0.00002     5.17013    0.00000      0.66160  21.59s
    496     0.00002     5.17075    0.00000      0.66150  21.51s
    497     0.00002     5.17138    0.00000      0.66150  21.52s
    498     0.00002     5.17199    0.00000      0.66150  21.51s
    499     0.00002     5.17261    0.00000      0.66150  21.53s
    500     0.00002     5.17324    0.00000      0.66150  21.54s
The accuracy of this network is: 0.65

#2

[Accuracy]: 0.67

[Architecture]

  • input layer: 1 * 32 * 32
  • convolution1 layer
    • 96 filters
    • filter size: (3, 3)
  • pool1 layer:
    • pool size: (2, 2)
  • convolution2 layer:
    • 96 filters
    • filter size: (2, 2)
  • pool2 layer:
    • pool size: (2, 2)
  • convolution3 layer:
    • 96 filters
    • filter size: (2, 2)
  • pool3 layer:
    • pool size: (2, 2)
  • hidden4 layer:
    • 200 units
  • softmax output layer: 10 units

Parameters:

#layer parameters:
input_shape = (None, 1, 32, 32),
conv1_num_filters = 96, conv1_filter_size = (3, 3), pool1_pool_size = (2, 2),
conv2_num_filters = 96, conv2_filter_size = (2, 2), pool2_pool_size = (2, 2),
conv3_num_filters = 96, conv3_filter_size = (2, 2), pool3_pool_size = (2, 2),
hidden4_num_units = 200,
output_nonlinearity = softmax,
output_num_units = 10,

#optimization parameters:
update = nesterov_momentum,
update_learning_rate = 0.005,
update_momentum = 0.9,
regression = False,
max_epochs = 500,
verbose = 1

Result: [test_2.txt]

(figure to be added)

epoch    trn loss    val loss    trn/val    valid acc  dur
-------  ----------  ----------  ---------  -----------  ------
(etc.)
  492     0.00003     4.17204    0.00001      0.67850  113.06s
  493     0.00003     4.17267    0.00001      0.67850  113.08s
  494     0.00003     4.17331    0.00001      0.67850  113.17s
  495     0.00003     4.17395    0.00001      0.67850  113.08s
  496     0.00003     4.17458    0.00001      0.67850  113.24s
  497     0.00003     4.17521    0.00001      0.67850  107.29s
  498     0.00003     4.17585    0.00001      0.67840  113.90s
  499     0.00003     4.17648    0.00001      0.67840  114.18s
  500     0.00003     4.17711    0.00001      0.67840  109.93s
The accuracy of this network is: 0.67

#3

[Accuracy]: 0.72

[Architecture]

  • input layer: 3 * 32 * 32
  • convolution11 layer
    • 96 filters
    • filter size: (5, 5)
  • convolution12 ~ convolution15 layers
    • 96 filters each layer
    • each filter size: (3, 3)
  • pool1 layer:
    • pool size: (2, 2)
  • convolution21 ~ convolution23 layers:
    • 128 filters
    • filter size: (3, 3)
  • pool2 layer:
    • pool size: (2, 2)
  • fully-connect3 layer:
    • 64 units (with dropout)
  • fully-connect4 layer:
    • 64 units
  • fully-connect5 layer: 10 units (softmax output)

Normalization:

# Normalization: X = X/255 - 0.5
X_train_2d = X_train/255.0 - 0.5
X_test_2d = X_test/255.0 - 0.5

Parameters:

#layer parameters:
input_shape = (None, 3, 32, 32),
conv11_num_filters = 96, conv11_filter_size = (5, 5),
conv12_num_filters = 96, conv12_filter_size = (3, 3),
conv13_num_filters = 96, conv13_filter_size = (3, 3),
conv14_num_filters = 96, conv14_filter_size = (3, 3),
conv15_num_filters = 96, conv15_filter_size = (3, 3),
pool1_pool_size = (2, 2),

conv21_num_filters = 128, conv21_filter_size = (3, 3),
conv22_num_filters = 128, conv22_filter_size = (3, 3),
conv23_num_filters = 128, conv23_filter_size = (3, 3),
pool2_pool_size = (2, 2),

fc3_num_units = 64,
fc4_num_units = 64,
fc5_nonlinearity = softmax,
fc5_num_units = 10,

#optimization parameters:
update = nesterov_momentum,
update_learning_rate = 0.005,
update_momentum = 0.9,
regression = False,
max_epochs = 5000,
verbose = 1

Result: [test_3.txt]

## Layer information

  #  name    size
---  ------  --------
  0  input   3x32x32
  1  conv11  96x28x28
  2  conv12  96x26x26
  3  conv13  96x24x24
  4  conv14  96x22x22
  5  conv15  96x20x20
  6  pool1   96x10x10
  7  conv21  128x8x8
  8  conv22  128x6x6
  9  conv23  128x4x4
 10  pool2   128x2x2
 11  fc3     64
 12  drop3   64
 13  fc4     64
 14  fc5     10
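
Each spatial size in the layer table above follows from size - filter + 1 for a "valid" convolution and size // 2 for a 2x2 pool; a short consistency check against the table:

```python
# Reproduce the spatial sizes reported in the layer table for #3.
size = 32
sizes = {}
for name, k in [("conv11", 5), ("conv12", 3), ("conv13", 3),
                ("conv14", 3), ("conv15", 3)]:
    size = size - k + 1          # valid convolution
    sizes[name] = size
size //= 2                        # pool1
sizes["pool1"] = size
for name in ("conv21", "conv22", "conv23"):
    size = size - 3 + 1
    sizes[name] = size
size //= 2                        # pool2
sizes["pool2"] = size
print(sizes)
```

This matches the table: 28, 26, 24, 22, 20, 10, 8, 6, 4, 2.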

  epoch    trn loss    val loss    trn/val    valid acc  dur
-------  ----------  ----------  ---------  -----------  ------
      1     2.25123     2.08595    1.07924      0.17830  18.35s
      2     2.01315     1.88202    1.06967      0.25520  18.51s
      3     1.87720     1.78446    1.05197      0.33000  18.58s
      4     1.77388     1.68541    1.05249      0.37030  18.72s
      5     1.65891     1.52453    1.08815      0.43750  18.80s
      6     1.57187     1.42692    1.10158      0.47290  19.29s
      7     1.49444     1.39737    1.06946      0.48400  19.23s
      8     1.42332     1.31857    1.07945      0.52020  19.28s
      9     1.36255     1.24562    1.09387      0.54730  19.54s
     10     1.30081     1.32667    0.98051      0.52650  19.40s
     11     1.25665     1.14322    1.09922      0.58940  19.58s
(etc.)
   2756     0.00223     2.24900    0.00099      0.72500  19.54s
   2757     0.00426     2.20700    0.00193      0.72220  19.82s
   2758     0.00348     2.28179    0.00152      0.72780  19.47s
   2759     0.00530     2.22349    0.00238      0.72790  19.56s
   2760     0.00192     2.39914    0.00080      0.72560  19.85s
   2761     0.00700     2.25151    0.00311      0.73000  19.65s
   2762     0.00500     2.26098    0.00221      0.71980  19.46s
   2763     0.00741     2.27235    0.00326      0.72010  19.61s
   2764     0.01274     2.21007    0.00577      0.71170  19.61s
   2765     0.00881     2.24880    0.00392      0.71760  19.81s
   2766     0.01241     2.13093    0.00582      0.71760  19.56s
   2767     0.00931     2.03755    0.00457      0.71560  19.56s
   2768     0.00657     2.09411    0.00314      0.72430  19.53s
   2769     0.00299     2.10017    0.00142      0.72870  19.98s
   2770     0.00297     2.18330    0.00136      0.72870  19.51s
(etc.)

The accuracy of this network is about 0.72.

#4

[Accuracy]: 0.74

[Architecture]

  • input layer: 3 * 32 * 32
  • convolution11 layer
    • 96 filters
    • filter size: (5, 5)
  • convolution12 ~ convolution15 layers
    • 96 filters each layer
    • each filter size: (3, 3)
  • pool1 layer:
    • pool size: (2, 2)
  • convolution21 ~ convolution23 layers:
    • 128 filters
    • filter size: (3, 3)
  • pool2 layer:
    • pool size: (2, 2)
  • fully-connect3 layer:
    • 64 units (with dropout)
  • fully-connect4 layer:
    • 64 units
  • fully-connect5 layer: 10 units (softmax output)

Normalization:

# Normalization: X = ( X - X.mean ) / X.std
X_train_2d = ( X_train - X_train.mean() ) / X_train.std()
X_test_2d = ( X_test - X_test.mean() ) / X_test.std()

Parameters:

#layer parameters:
input_shape = (None, 3, 32, 32),
conv11_num_filters = 96, conv11_filter_size = (5, 5),
conv12_num_filters = 96, conv12_filter_size = (3, 3),
conv13_num_filters = 96, conv13_filter_size = (3, 3),
conv14_num_filters = 96, conv14_filter_size = (3, 3),
conv15_num_filters = 96, conv15_filter_size = (3, 3),
pool1_pool_size = (2, 2),

conv21_num_filters = 128, conv21_filter_size = (3, 3),
conv22_num_filters = 128, conv22_filter_size = (3, 3),
conv23_num_filters = 128, conv23_filter_size = (3, 3),
pool2_pool_size = (2, 2),

fc3_num_units = 64,
fc4_num_units = 64,
fc5_nonlinearity = softmax,
fc5_num_units = 10,

#optimization parameters:
update = nesterov_momentum,
update_learning_rate = 0.005,
update_momentum = 0.9,
regression = False,
max_epochs = 1000,
verbose = 1,

Result: [test_4.txt]

## Layer information

  #  name    size
---  ------  --------
  0  input   3x32x32
  1  conv11  96x28x28
  2  conv12  96x26x26
  3  conv13  96x24x24
  4  conv14  96x22x22
  5  conv15  96x20x20
  6  pool1   96x10x10
  7  conv21  128x8x8
  8  conv22  128x6x6
  9  conv23  128x4x4
 10  pool2   128x2x2
 11  fc3     64
 12  drop3   64
 13  fc4     64
 14  fc5     10

  epoch    trn loss    val loss    trn/val    valid acc  dur
-------  ----------  ----------  ---------  -----------  ------
      1     2.21620     1.97200    1.12383      0.27900  17.50s
      2     1.88873     1.69683    1.11309      0.37330  17.64s
      3     1.69117     1.55265    1.08922      0.41270  17.74s
      4     1.56713     1.42391    1.10059      0.46910  17.84s
      5     1.46858     1.35092    1.08710      0.50280  17.90s
      6     1.38012     1.23352    1.11884      0.55270  17.96s
      7     1.30769     1.17323    1.11461      0.58260  17.99s
      8     1.24177     1.13416    1.09488      0.59370  18.09s
      9     1.18278     1.10050    1.07477      0.61150  18.12s
     10     1.12637     1.04672    1.07609      0.62850  18.15s
(etc.)
    991     0.00699     2.29272    0.00305      0.73820  19.12s
    992     0.00485     2.31830    0.00209      0.74690  19.12s
    993     0.00350     2.39263    0.00146      0.74510  19.12s
    994     0.02063     1.83265    0.01125      0.73300  19.12s
    995     0.01246     2.07846    0.00600      0.73490  19.12s
    996     0.02533     1.94419    0.01303      0.74310  19.12s
    997     0.02184     2.10608    0.01037      0.74240  19.12s
    998     0.00938     2.34317    0.00400      0.74750  19.12s
    999     0.01172     2.30217    0.00509      0.74590  19.12s
   1000     0.02268     2.17464    0.01043      0.72460  19.13s

The accuracy of this network is about 0.74.
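
The only change from #3 is the input normalization: dividing by 255 and shifting keeps pixels in [-0.5, 0.5], while subtracting the mean and dividing by the standard deviation gives zero mean and unit variance. A minimal NumPy sketch of the two schemes; the random batch below is a hypothetical stand-in for X_train, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for X_train: 100 RGB images of 32x32 uint8 pixels
X_train = rng.integers(0, 256, size=(100, 3, 32, 32)).astype(np.float64)

# Scheme used in #3: rescale into [-0.5, 0.5]
X_scaled = X_train / 255.0 - 0.5

# Scheme used in #4: standardize to zero mean and unit variance
X_std = (X_train - X_train.mean()) / X_train.std()

print(X_scaled.min(), X_scaled.max())   # stays within [-0.5, 0.5]
print(X_std.mean(), X_std.std())        # ~0.0, ~1.0
```
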

#5

[Accuracy]: 0.77

[Architecture]

  • input layer: 3 * 32 * 32
  • convolution1 layer
    • 96 filters
    • filter size: (3, 3)
    • zero-padding = 1
    • normalization
  • pool1 layer:
    • pool size: (2, 2)
  • convolution2 layer
    • 96 filters
    • filter size: (2, 2)
    • zero-padding = 1
    • normalization
  • pool2 layer:
    • pool size: (2, 2)
  • convolution3 ~ convolution5 layers:
    • 96 filters
    • filter size: (2, 2)
    • zero-padding = 1
  • pool5 layer:
    • pool size: (2, 2)
  • fully-connect6 layer:
    • 128 units (with dropout)
  • fully-connect7 layer:
    • 128 units (with dropout)
  • fully-connect8 layer: 10 units (softmax output)

Parameters:

#layer parameters:
## input layer
input_shape = (None, X_train_2d.shape[1], X_train_2d.shape[2], X_train_2d.shape[3]),
## layer conv1
conv1_num_filters = 96,
conv1_filter_size = (3, 3),
conv1_pad = 1,
## layer norm1 (normalization)
#norm1_incoming = conv1_layer,
## layer pool1
pool1_pool_size = (2,2),

## layer conv2
conv2_num_filters = 96,
conv2_filter_size = (2, 2),
conv2_pad = 1,
## layer norm2 (normalization)
#norm2_incoming = conv2_layer,
## layer pool2
pool2_pool_size = (2,2),

## layer conv3
conv3_num_filters = 96,
conv3_filter_size = (2, 2),
conv3_pad = 1,
#conv3_nonlinearity=lasagne.nonlinearities.rectify, # ReLU
#conv3_W=lasagne.init.GlorotUniform(),

## layer conv4
conv4_num_filters = 96,
conv4_filter_size = (2, 2),
conv4_pad = 1,

## layer conv5
conv5_num_filters = 96,
conv5_filter_size = (2, 2),
conv5_pad = 1,
## layer pool5
pool5_pool_size = (2,2),

## layer output1
fc6_nonlinearity = rectify,
fc6_num_units = 128,
## layer output2
fc7_nonlinearity = rectify,
fc7_num_units = 128,
## layer output3 (softmax_output)
fc8_nonlinearity = softmax,
fc8_num_units = 10,

#optimization parameters:
update = nesterov_momentum,
update_learning_rate = 0.005,
update_momentum = 0.9,
regression = False,
max_epochs = 1000,
verbose = 3,

Result: [test_5.txt]

# Neural Network with 475658 learnable parameters

## Layer information

name    size        total    cap.Y    cap.X    cov.Y    cov.X    filter Y    filter X    field Y    field X
------  --------  -------  -------  -------  -------  -------  ----------  ----------  ---------  ---------
input   3x32x32      3072   100.00   100.00   100.00   100.00          32          32         32         32
conv1   96x32x32    98304   100.00   100.00     9.38     9.38           3           3          3          3
norm1   96x32x32    98304   100.00   100.00   100.00   100.00          32          32         32         32
pool1   96x16x16    24576   100.00   100.00   100.00   100.00          32          32         32         32
conv2   96x17x17    27744   100.00   100.00   100.00   100.00          32          32         32         32
norm2   96x17x17    27744   100.00   100.00   100.00   100.00          32          32         32         32
pool2   96x8x8       6144   100.00   100.00   100.00   100.00          32          32         32         32
conv3   96x9x9       7776   100.00   100.00   100.00   100.00          32          32         32         32
conv4   96x10x10     9600   100.00   100.00   100.00   100.00          32          32         32         32
conv5   96x11x11    11616   100.00   100.00   100.00   100.00          32          32         32         32
pool5   96x5x5       2400   100.00   100.00   100.00   100.00          32          32         32         32
fc6     128           128   100.00   100.00   100.00   100.00          32          32         32         32
droup6  128           128   100.00   100.00   100.00   100.00          32          32         32         32
fc7     128           128   100.00   100.00   100.00   100.00          32          32         32         32
droup7  128           128   100.00   100.00   100.00   100.00          32          32         32         32
fc8     10             10   100.00   100.00   100.00   100.00          32          32         32         32

Explanation
    X, Y:    image dimensions
    cap.:    learning capacity
    cov.:    coverage of image
    magenta: capacity too low (<1/6)
    cyan:    image coverage too high (>100%)
    red:     capacity too low and coverage too high
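
The 475,658 learnable parameters can be re-derived from the layer table: a convolution layer holds in_channels * kH * kW * num_filters weights plus num_filters biases, a dense layer holds n_in * n_out weights plus n_out biases, and the pooling, normalization, and dropout layers hold none. A by-hand check:

```python
def conv_params(c_in, k, n):     # weights + biases of one conv layer
    return c_in * k * k * n + n

def dense_params(n_in, n_out):   # weights + biases of one dense layer
    return n_in * n_out + n_out

total = (
    conv_params(3, 3, 96)             # conv1 (3x3 filters)
    + conv_params(96, 2, 96) * 4      # conv2..conv5 (2x2 filters)
    + dense_params(96 * 5 * 5, 128)   # fc6, fed by pool5 (96x5x5)
    + dense_params(128, 128)          # fc7
    + dense_params(128, 10)           # fc8
)
print(total)  # 475658, matching the log header
```
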


  epoch    trn loss    val loss    trn/val    valid acc  dur
-------  ----------  ----------  ---------  -----------  ------
      1     2.27401     2.15328    1.05607      0.26040  23.90s
      2     2.07928     1.83009    1.13616      0.35590  23.92s
      3     1.84605     1.62356    1.13704      0.41440  23.93s
      4     1.71127     1.51742    1.12776      0.44400  23.95s
      5     1.61711     1.44312    1.12056      0.47570  23.96s
      6     1.54887     1.38150    1.12116      0.50440  23.96s
      7     1.49140     1.32587    1.12484      0.52080  23.96s
      8     1.44208     1.28618    1.12121      0.53260  24.01s
      9     1.39669     1.24262    1.12399      0.54630  24.01s
     10     1.34792     1.20291    1.12055      0.56170  24.00s
     11     1.30831     1.15946    1.12837      0.57790  24.01s
     12     1.26420     1.10497    1.14410      0.60290  24.01s
     13     1.21197     1.07574    1.12664      0.61070  24.01s
     14     1.17215     1.02685    1.14151      0.63870  24.01s
     15     1.12839     0.99114    1.13848      0.64440  24.01s
     16     1.09300     0.98255    1.11241      0.64880  24.01s
     17     1.05941     0.93162    1.13717      0.66950  24.01s
     18     1.02647     0.90571    1.13333      0.67860  24.01s
(etc.)
    991     0.00827     3.21902    0.00257      0.77400  24.02s
    992     0.00801     3.15738    0.00254      0.77490  24.02s
    993     0.01455     3.08897    0.00471      0.77360  24.02s
    994     0.00908     3.14662    0.00288      0.77460  24.01s
    995     0.01122     3.47437    0.00323      0.77130  24.02s
    996     0.01210     3.18974    0.00379      0.76880  24.02s
    997     0.01247     3.07643    0.00405      0.77200  24.02s
    998     0.00770     3.40515    0.00226      0.76830  24.01s
    999     0.01283     3.20475    0.00400      0.77130  24.02s
   1000     0.01426     3.09213    0.00461      0.76670  24.01s

The accuracy of this network is about 0.77.

#6

[Accuracy]: 0.77

[Architecture]

  • input layer: 3 * 32 * 32
  • convolution1 layer
    • 96 filters
    • filter size: (3, 3)
    • zero-padding = 1
    • normalization
  • pool1 layer:
    • pool size: (2, 2)
  • convolution2 layer
    • 96 filters
    • filter size: (2, 2)
    • zero-padding = 1
    • normalization
  • pool2 layer:
    • pool size: (2, 2)
  • convolution3 ~ convolution5 layers:
    • 96 filters
    • filter size: (2, 2)
    • zero-padding = 1
  • pool5 layer:
    • pool size: (2, 2)
  • fully-connect6 layer:
    • 128 units (with dropout)
  • fully-connect7 layer:
    • 128 units (with dropout)
  • fully-connect8 layer: 10 units (softmax output)

Parameters:

#layer parameters:
## input layer
input_shape = (None, X_train_2d.shape[1], X_train_2d.shape[2], X_train_2d.shape[3]),
## layer conv1
conv1_num_filters = 96,
conv1_filter_size = (5, 5),
conv1_pad = 1,
## layer norm1 (normalization)
#norm1_incoming = conv1_layer,
## layer pool1
pool1_pool_size = (2,2),

## layer conv2
conv2_num_filters = 96,
conv2_filter_size = (3, 3),
conv2_pad = 1,
## layer norm2 (normalization)
#norm2_incoming = conv2_layer,
## layer pool2
pool2_pool_size = (2,2),

## layer conv3
conv3_num_filters = 96,
conv3_filter_size = (3, 3),
conv3_pad = 1,
#conv3_nonlinearity=lasagne.nonlinearities.rectify, # ReLU
#conv3_W=lasagne.init.GlorotUniform(),

## layer conv4
conv4_num_filters = 96,
conv4_filter_size = (3, 3),
conv4_pad = 1,

## layer conv5
conv5_num_filters = 96,
conv5_filter_size = (3, 3),
conv5_pad = 1,
## layer pool5
pool5_pool_size = (2,2),

## layer output1
fc6_nonlinearity = rectify,
fc6_num_units = 128,
## layer output2
fc7_nonlinearity = rectify,
fc7_num_units = 128,
## layer output3 (softmax_output)
fc8_nonlinearity = softmax,
fc8_num_units = 10,

#optimization parameters:
update = nesterov_momentum,
update_learning_rate = 0.003,
update_momentum = 0.9,
regression = False,
max_epochs = 5000,
verbose = 3,

Result: [test_6.txt]

# Neural Network with 467978 learnable parameters

## Layer information

name    size        total    cap.Y    cap.X    cov.Y    cov.X    filter Y    filter X    field Y    field X
------  --------  -------  -------  -------  -------  -------  ----------  ----------  ---------  ---------
input   3x32x32      3072   100.00   100.00   100.00   100.00          32          32         32         32
conv1   96x30x30    86400   100.00   100.00    15.62    15.62           5           5          5          5
norm1   96x30x30    86400   100.00   100.00   100.00   100.00          32          32         32         32
pool1   96x15x15    21600   100.00   100.00   100.00   100.00          32          32         32         32
conv2   96x15x15    21600   100.00   100.00   100.00   100.00          32          32         32         32
norm2   96x15x15    21600   100.00   100.00   100.00   100.00          32          32         32         32
pool2   96x7x7       4704   100.00   100.00   100.00   100.00          32          32         32         32
conv3   96x7x7       4704   100.00   100.00   100.00   100.00          32          32         32         32
conv4   96x7x7       4704   100.00   100.00   100.00   100.00          32          32         32         32
conv5   96x7x7       4704   100.00   100.00   100.00   100.00          32          32         32         32
pool5   96x3x3        864   100.00   100.00   100.00   100.00          32          32         32         32
fc6     128           128   100.00   100.00   100.00   100.00          32          32         32         32
droup6  128           128   100.00   100.00   100.00   100.00          32          32         32         32
fc7     128           128   100.00   100.00   100.00   100.00          32          32         32         32
droup7  128           128   100.00   100.00   100.00   100.00          32          32         32         32
fc8     10             10   100.00   100.00   100.00   100.00          32          32         32         32

Explanation
    X, Y:    image dimensions
    cap.:    learning capacity
    cov.:    coverage of image
    magenta: capacity too low (<1/6)
    cyan:    image coverage too high (>100%)
    red:     capacity too low and coverage too high
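
The sizes in #6's layer table follow from size + 2*pad - filter + 1 per padded convolution and size // 2 per pool; redoing that arithmetic also yields this configuration's learnable-parameter count, 467,978 (pooling, normalization, and dropout layers contribute no parameters):

```python
def conv_out(size, k, pad=1):
    # Output size of a convolution with zero-padding `pad`
    return size + 2 * pad - k + 1

size = conv_out(32, 5)    # conv1: 30
size //= 2                # pool1: 15
size = conv_out(size, 3)  # conv2: 15 (3x3 with pad 1 preserves size)
size //= 2                # pool2: 7
for _ in range(3):        # conv3..conv5: stay 7x7
    size = conv_out(size, 3)
size //= 2                # pool5: 3
print("pool5 output:", f"96x{size}x{size}")

params = (
    3 * 5 * 5 * 96 + 96                # conv1
    + (96 * 3 * 3 * 96 + 96) * 4       # conv2..conv5
    + (96 * size * size) * 128 + 128   # fc6, fed by pool5
    + 128 * 128 + 128                  # fc7
    + 128 * 10 + 10                    # fc8
)
print("learnable parameters:", params)
```
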


  epoch    trn loss    val loss    trn/val    valid acc  dur
-------  ----------  ----------  ---------  -----------  ------
      1     2.30376     2.30088    1.00125      0.13160  23.14s
      2     2.30015     2.29713    1.00132      0.17610  23.18s
      3     2.29647     2.29189    1.00200      0.20090  23.22s
      4     2.29179     2.28441    1.00323      0.21650  23.25s
      5     2.28456     2.27197    1.00554      0.22350  23.23s
      6     2.27214     2.24981    1.00993      0.22890  23.24s
      7     2.25237     2.21535    1.01671      0.24100  23.23s
      8     2.22794     2.17579    1.02397      0.24350  23.24s
      9     2.19441     2.13039    1.03005      0.24850  23.28s
     10     2.16467     2.09061    1.03543      0.25330  23.29s
(etc.)
   4990     0.00106     3.27676    0.00032      0.77860  23.33s
   4991     0.00117     3.20488    0.00036      0.78040  23.33s
   4992     0.00109     3.15882    0.00034      0.78090  23.33s
   4993     0.00110     3.09296    0.00036      0.78020  23.32s
   4994     0.00095     3.08368    0.00031      0.78350  23.33s
   4995     0.00114     3.17590    0.00036      0.78150  23.33s
   4996     0.00067     3.11152    0.00021      0.78170  23.32s
   4997     0.00097     3.14198    0.00031      0.78350  23.32s
   4998     0.00082     3.14036    0.00026      0.78120  23.33s
   4999     0.00134     3.13444    0.00043      0.77900  23.33s
   5000     0.00127     3.26815    0.00039      0.77760  23.33s
The accuracy of this network is: 0.77
