combinedDeepLearningActiveContour/functions/feedForwardAutoencoder.m
function [activation] = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)

% theta: trained weights from the autoencoder
% visibleSize: the number of input units (e.g., 64)
% hiddenSize: the number of hidden units (e.g., 25)
% data: the matrix containing the training data as columns, so that data(:,i) is the i-th training example
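%
% Example usage (an illustrative sketch; opttheta and images are placeholder
% names, not defined in this file):
%   features = feedForwardAutoencoder(opttheta, hiddenSize, visibleSize, images);
%   % features is a hiddenSize-by-size(images,2) matrix of hidden activations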

% We first convert theta to the (W1, b1) matrix/vector format, so that this
% follows the notation convention of the lecture notes; only the encoder
% parameters W1 and b1 are needed for the feedforward pass.
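% (Assumed parameter layout, following the UFLDL sparse autoencoder
% convention: theta = [W1(:); W2(:); b1(:); b2(:)], with W1 of size
% hiddenSize-by-visibleSize and W2 of size visibleSize-by-hiddenSize,
% which is why b1 starts at index 2*hiddenSize*visibleSize + 1.)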

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the activation of the hidden layer for the Sparse Autoencoder.

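% z2 is the pre-activation of the hidden layer; repmat replicates the bias
% vector b1 once per training example (i.e., per column of data).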
z2 = W1*data + repmat(b1, 1, size(data, 2));
activation = sigmoid(z2);
%-------------------------------------------------------------------

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. It takes a (row or
% column) vector as input, say (z1, z2, z3), and returns (f(z1), f(z2), f(z3)).
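% Note: exp and ./ operate element-wise, so sigmoid also accepts the matrix
% z2 above, applying f to every entry.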

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end