This is a static copy of a profile report

Home

Function details for train>init_model

train>init_model (Calls: 1, Time: 0.012 s)
Generated 19-Jun-2021 04:01:56 using performance time.
subfunction in file /nfs/inm_phd/07/d07944009/2021/0618-proj6/simpleNN/MATLAB/cnn/train.m

Parents (calling functions)

Function Name  Function Type  Calls
train          function           1

Lines where the most time was spent

Line Number  Code                                Calls  Total Time  % Time
         60  model.weight{m} = gpu(ftype(ra...       3     0.005 s   36.9%
         61  model.bias{m} = gpu(@zeros, [m...       3     0.002 s   15.2%
         67  model.weight{m} = gpu(ftype(ra...       1     0.001 s   10.1%
         73  model.var_ptr = [1; cumsum(var...       1     0.001 s    8.3%
         52  model.ht_pad(m) = model.ht_inp...       3     0.001 s    5.8%
All other lines                                            0.003 s   23.8%
Totals                                                     0.012 s  100%
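
The two hottest lines, 60 and 61, allocate and initialize the convolutional weights and biases inside the layer loop (3 calls, i.e. one per convolutional layer); together with line 67 they account for most of the 0.012 s. The sqrt(2.0/fan_in) scaling on lines 60 and 67 is consistent with He initialization, where fan_in for a convolutional layer is wd_filter(m)^2 * ch_input(m). A minimal sketch of the same pattern with hypothetical sizes (not values taken from this run):

    % Hypothetical sizes: 5x5 filter, 3 input channels, 32 output channels.
    fan_in = 5*5*3;                                  % wd_filter(m)^2 * ch_input(m)
    ch_out = 32;                                     % ch_input(m+1)
    W = randn(ch_out, fan_in) * sqrt(2.0/fan_in);    % He-style scaling of the filter matrix (as on line 60)
    b = zeros(ch_out, 1);                            % bias vector, as on line 61, before any gpu/ftype wrapping
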
Children (called functions)

Function Name                          Function Type  Calls  Total Time  % Time
gpu                                    function           8     0.003 s   21.5%
ftype                                  function           4     0.001 s    6.1%
Self time (built-ins, overhead, etc.)                            0.009 s   72.4%
Totals                                                           0.012 s  100%
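
The 8 gpu calls correspond to lines 60, 61, 67, and 68 of the listing below (3 + 3 + 1 + 1), and the 4 ftype calls to lines 60 and 67 (3 + 1). The bodies of these simpleNN helpers are not part of this report; a plausible minimal sketch, assuming they are thin wrappers driven by the gpu_use and float_type globals read on lines 23-26 (the _sketch names and their behavior are hypothetical, not the library's actual code):

    function x = ftype_sketch(x)
    % Cast to the configured float type; assumes float_type is 'single' or 'double'.
    global float_type;
    if strcmp(float_type, 'single')
        x = single(x);
    end
    end

    function x = gpu_sketch(arg, sz)
    % Move data to the GPU when gpu_use is set; covers both calling forms seen
    % in the listing: gpu(X) and gpu(@zeros, [m, 1]).
    global gpu_use;
    if isa(arg, 'function_handle')
        x = arg(sz);          % e.g. zeros([m, 1])
    else
        x = arg;
    end
    if gpu_use
        x = gpuArray(x);      % requires Parallel Computing Toolbox
    end
    end

The large self-time share (0.009 s, 72.4%) covers built-ins such as randn and zeros, which the profiler does not list as children.
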
Code Analyzer results

Line Number  Message
(none)

Coverage results

Total lines in function                       55
Non-code lines (comments, blank lines)         9
Code lines (lines that can run)               46
Code lines that did run                       46
Code lines that did not run                    0
Coverage (did run/can run)              100.00 %
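
The counts are self-consistent: 9 non-code lines + 46 code lines = 55 total lines, and with all 46 executable lines run the coverage ratio is 46/46 = 100.00 %.
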
Function listing

   time  calls   line
                   19  function model = init_model(net_config)
                   20
< 0.001      1     21  model = struct;
< 0.001      1     22  model.net_config = net_config;
< 0.001      1     23  global gpu_use;
< 0.001      1     24  model.gpu_use = gpu_use;
< 0.001      1     25  global float_type;
< 0.001      1     26  model.float_type = float_type;
                   27
< 0.001      1     28  LC = net_config.LC;
< 0.001      1     29  L = net_config.L;
                   30
< 0.001      1     31  model.LC = LC;
< 0.001      1     32  model.L = L;
< 0.001      1     33  model.nL = net_config.nL;
                   34
< 0.001      1     35  model.ht_input = [net_config.ht_input; zeros(LC, 1)];  % height of input image
< 0.001      1     36  model.wd_input = [net_config.wd_input; zeros(LC, 1)];  % width of input image
< 0.001      1     37  model.ch_input = net_config.ch_input(:);  % #channels of input image
< 0.001      1     38  model.wd_pad_added = net_config.wd_pad_added(:);  % width of zero-padding around input image border
< 0.001      1     39  model.ht_pad = zeros(LC, 1);  % height of image after padding
< 0.001      1     40  model.wd_pad = zeros(LC, 1);  % width of image after padding
< 0.001      1     41  model.ht_conv = zeros(LC, 1);  % height of image after convolution
< 0.001      1     42  model.wd_conv = zeros(LC, 1);  % width of image after convolution
< 0.001      1     43  model.wd_filter = net_config.wd_filter(:);  % width of filter in convolution
< 0.001      1     44  model.strides = net_config.strides(:);  % strides of convolution
< 0.001      1     45  model.wd_subimage_pool = net_config.wd_subimage_pool(:);  % width of filter in pooling
< 0.001      1     46  model.full_neurons = net_config.full_neurons(:);  % #neurons in fully-connected layers
< 0.001      1     47  model.weight = cell(L, 1);
< 0.001      1     48  model.bias = cell(L, 1);
< 0.001      1     49  var_num = zeros(L, 1);
                   50
< 0.001      1     51  for m = 1 : LC
< 0.001      3     52  	model.ht_pad(m) = model.ht_input(m) + 2*model.wd_pad_added(m);
< 0.001      3     53  	model.wd_pad(m) = model.wd_input(m) + 2*model.wd_pad_added(m);
< 0.001      3     54  	model.ht_conv(m) = floor((model.ht_pad(m) - model.wd_filter(m))/model.strides(m)) + 1;
< 0.001      3     55  	model.wd_conv(m) = floor((model.wd_pad(m) - model.wd_filter(m))/model.strides(m)) + 1;
< 0.001      3     56  	model.ht_input(m+1) = floor(model.ht_conv(m)/model.wd_subimage_pool(m));
< 0.001      3     57  	model.wd_input(m+1) = floor(model.wd_conv(m)/model.wd_subimage_pool(m));
< 0.001      3     58  	var_num(m) = model.ch_input(m+1)*(model.wd_filter(m)*model.wd_filter(m)*model.ch_input(m) + 1);
                   59
  0.005      3     60  	model.weight{m} = gpu(ftype(randn(model.ch_input(m+1), model.wd_filter(m)*model.wd_filter(m)*model.ch_input(m))*sqrt(2.0/(model.wd_filter(m)*model.wd_filter(m)*model.ch_input(m)))));
  0.002      3     61  	model.bias{m} = gpu(@zeros, [model.ch_input(m+1), 1]);
< 0.001      3     62  end
                   63
< 0.001      1     64  num_neurons_prev = model.ht_input(LC+1)*model.wd_input(LC+1)*model.ch_input(LC+1);
< 0.001      1     65  for m = LC+1 : L
< 0.001      1     66  	num_neurons = model.full_neurons(m - LC);
  0.001      1     67  	model.weight{m} = gpu(ftype(randn(num_neurons, num_neurons_prev)*sqrt(2.0/(num_neurons_prev))));
< 0.001      1     68  	model.bias{m} = gpu(@zeros, [num_neurons, 1]);
< 0.001      1     69  	var_num(m) = num_neurons * (num_neurons_prev + 1);
< 0.001      1     70  	num_neurons_prev = num_neurons;
< 0.001      1     71  end
                   72  % starting index of trained variables (including biases) for each layer
  0.001      1     73  model.var_ptr = [1; cumsum(var_num)+1];

Other subfunctions in this file are not included in this listing.
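
The size arithmetic in the convolutional loop (lines 52-58) and the variable-pointer vector built on line 73 can be checked with a small worked example; all numbers below are hypothetical and are not taken from this run:

    % Hypothetical network: one conv layer on a 28x28x1 input, 5x5 filter,
    % stride 1, no padding, 2x2 pooling, 16 output channels, followed by one
    % fully-connected layer with 10 neurons.
    ht_pad  = 28 + 2*0;                       % line 52: 28
    ht_conv = floor((ht_pad - 5)/1) + 1;      % line 54: 24
    ht_next = floor(ht_conv/2);               % line 56: 12 (input height of the next layer)
    var_num_conv = 16*(5*5*1 + 1);            % line 58: 416 variables (weights + biases)
    num_neurons_prev = 12*12*16;              % line 64: 2304 flattened inputs to the FC layer
    var_num_fc = 10*(num_neurons_prev + 1);   % line 69: 23050 variables
    var_ptr = [1; cumsum([var_num_conv; var_num_fc]) + 1];   % line 73: [1; 417; 23467]

var_ptr thus gives, for each layer, the index of its first variable in the stacked parameter vector, with the final entry pointing one past the last variable.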