--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@
 
 ## Model
 The schematic of the U-Net model I used for this task.
-![image1](https://github.com/limingwu8/Lung-Segmentation/blob/master/images/model.png)
+![image1](https://github.com/limingwu8/Lung-Segmentation/blob/master/images/model.png?raw=true)
 A batch of single-channel 512x512 images is fed into the network. Feature extraction is performed by a series of CNN layers. Each blue arrow represents a CNN block, which is the combination of a convolution layer, a batch normalization layer, and a ReLU layer. The convolution kernel has size 3x3, stride 2, and zero padding. The double arrow denotes feature concatenation. Finally, a batch of 512x512x1 probability matrices is output to represent the segmented images. The binary cross-entropy loss is calculated between the ground-truth mask and the output prediction. The Adam optimizer is used with learning rate 1e-3 and weight decay 1e-4. Because of the huge number of parameters in U-Net, the model is parallelized across two Nvidia GTX 1080 graphics cards with a batch size of 8 images.
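
The described building block (convolution + batch norm + ReLU, with the stated optimizer and loss) can be sketched in PyTorch as follows. This is a minimal illustration, not the repository's actual code; the class name `ConvBlock` and channel counts are hypothetical, and the stride-2 setting simply mirrors the text above.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Hypothetical sketch of one 'blue arrow' block from the description:
    3x3 convolution (stride 2, zero padding) + batch normalization + ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Training setup as stated in the text: BCE loss, Adam with lr=1e-3 and
# weight decay=1e-4, and data parallelism over two GPUs when available.
model = ConvBlock(1, 64)  # channel counts chosen for illustration only
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1])
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

x = torch.randn(8, 1, 512, 512)  # batch of 8 single-channel 512x512 images
y = model(x)
print(tuple(y.shape))  # stride-2 conv halves spatial size: (8, 64, 256, 256)
```

Note that a stride-2 convolution halves the spatial resolution at each block, which is how the contracting path of the network reduces a 512x512 input down to the coarse feature maps before the expanding path upsamples them back.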
 
 ## Evaluation