Deep Learning with PyTorch Step-by-Step: A Beginner's Guide, by Daniel Voigt Godoy


true_b = 1
true_w = 2
N = 100

# Data Generation
np.random.seed(42)

# We divide w by 10
bad_w = true_w / 10

# And multiply x by 10
bad_x = np.random.rand(N, 1) * 10

# So, the net effect on y is zero - it is still
# the same as before
y = true_b + bad_w * bad_x + (.1 * np.random.randn(N, 1))

Then, I performed the same split as before for both original and bad datasets and plotted the training sets side-by-side, as you can see below:

# Generates train and validation sets
# It uses the same train_idx and val_idx as before,
# but it applies to bad_x
bad_x_train, y_train = bad_x[train_idx], y[train_idx]
bad_x_val, y_val = bad_x[val_idx], y[val_idx]

Figure 0.13 - Same data, different scales for feature x
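The train_idx and val_idx used in the split above were defined earlier in the chapter. For reference, here is a minimal sketch of how such a shuffled index split can be built; the 80/20 ratio is an assumption based on the earlier split:

import numpy as np

N = 100

# Shuffle the indices so the split is random
idx = np.arange(N)
np.random.shuffle(idx)

# Assumption: an 80/20 train/validation split, as in the earlier chapter code
train_idx = idx[:int(N * .8)]
val_idx = idx[int(N * .8):]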

The only difference between the two plots is the scale of feature x. Its range was [0, 1]; now it is [0, 10]. The label y hasn't changed, and I did not touch true_b.

Does this simple scaling have any meaningful impact on our gradient descent? Well, if it hadn't, I wouldn't be asking, right? Let's compute a new loss surface and compare it to the one we had before.

Figure 0.14 - Loss surface, before and after scaling feature x (Obs.: left plot looks a bit different than Figure 0.6 because it is centered at the "after" minimum)

Look at the contour values of Figure 0.14: the dark blue line was 3.0, and now it is 50.0! For the same range of parameter values, loss values are much higher.
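To put numbers on that comparison, here is a minimal sketch of how such a loss surface can be computed: evaluate the mean squared error over a grid of candidate (b, w) pairs for both versions of the feature. The grid ranges below are an assumption, chosen only to bracket the true parameters; the data generation mirrors the code above.

import numpy as np

np.random.seed(42)
N = 100
true_b, true_w = 1, 2

# Assumption: the original feature, generated as earlier in the chapter
x = np.random.rand(N, 1)
y = true_b + true_w * x + (.1 * np.random.randn(N, 1))

# Rescaled feature: same y, but x multiplied by 10
bad_x = x * 10

# Grid of candidate (b, w) pairs; the ranges are an assumption,
# picked to bracket the true parameters
b_range = np.linspace(-2, 4, 101)
w_range = np.linspace(-2, 4, 101)
bs, ws = np.meshgrid(b_range, w_range)

def mse_surface(feature, labels):
    # Predictions for every (b, w) pair and every data point:
    # broadcasting yields shape (101, 101, N)
    preds = bs[..., None] + ws[..., None] * feature.ravel()
    # MSE per grid point, averaged over the N data points
    return ((preds - labels.ravel()) ** 2).mean(axis=-1)

surface_before = mse_surface(x, y)
surface_after = mse_surface(bad_x, y)

# The same grid of parameters yields much larger losses after scaling
print(surface_before.max(), surface_after.max())

Because the prediction errors grow with the magnitude of x, the same parameter grid produces far larger losses on the rescaled data, which is exactly what the contour values show.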

