Book Introduction

Deep Learning (English Reprint Edition) [2025 | PDF | EPUB | MOBI | Kindle ebook, Baidu Netdisk download]

Deep Learning (English Reprint Edition)
  • By Josh Patterson and Adam Gibson
  • Publisher: Southeast University Press, Nanjing
  • ISBN: 9787564175160
  • Publication year: 2018
  • Listed page count: 510
  • File size: 60 MB
  • File page count: 532
  • Subject: Machine learning (English)

PDF Download


Click here for the online PDF download of this book [recommended: cloud decompression, fast and convenient]. Direct PDF download; works on both mobile and desktop.
Torrent download [fast via BT]. Tip: please use the BT client FDM for downloading. | Software download page | Direct-link download [convenient but slower] | [Read this book online] | [Get the extraction code online]

Download Instructions

Deep Learning (English Reprint Edition), PDF ebook download

The downloaded file is a RAR archive; use decompression software to extract it to obtain the PDF.

We recommend downloading with Free Download Manager (FDM), a free, ad-free, cross-platform BT client. All resources on this site are packaged as BT torrents, so a dedicated BT client is required; BitComet, qBittorrent, and uTorrent also work. Thunder (Xunlei) is currently not recommended because this book is not a popular resource on its network; once it becomes popular, Thunder will work as well.

(The file page count should exceed the listed page count, except for multi-volume ebooks.)

Note: all archives on this site require an extraction code. Click to download the archive extraction tool.

Table of Contents

1. A Review of Machine Learning  1
   The Learning Machines  1
   How Can Machines Learn?  2
   Biological Inspiration  4
   What Is Deep Learning?  6
   Going Down the Rabbit Hole  7
   Framing the Questions  8
   The Math Behind Machine Learning: Linear Algebra  8
   Scalars  9
   Vectors  9
   Matrices  10
   Tensors  10
   Hyperplanes  10
   Relevant Mathematical Operations  11
   Converting Data Into Vectors  11
   Solving Systems of Equations  13
   The Math Behind Machine Learning: Statistics  15
   Probability  16
   Conditional Probabilities  18
   Posterior Probability  19
   Distributions  19
   Samples Versus Population  22
   Resampling Methods  22
   Selection Bias  22
   Likelihood  23
   How Does Machine Learning Work?  23
   Regression  23
   Classification  25
   Clustering  26
   Underfitting and Overfitting  26
   Optimization  27
   Convex Optimization  29
   Gradient Descent  30
   Stochastic Gradient Descent  32
   Quasi-Newton Optimization Methods  33
   Generative Versus Discriminative Models  33
   Logistic Regression  34
   The Logistic Function  35
   Understanding Logistic Regression Output  35
   Evaluating Models  36
   The Confusion Matrix  36
   Building an Understanding of Machine Learning  40

2. Foundations of Neural Networks and Deep Learning  41
   Neural Networks  41
   The Biological Neuron  43
   The Perceptron  45
   Multilayer Feed-Forward Networks  50
   Training Neural Networks  56
   Backpropagation Learning  57
   Activation Functions  65
   Linear  66
   Sigmoid  66
   Tanh  67
   Hard Tanh  68
   Softmax  68
   Rectified Linear  69
   Loss Functions  71
   Loss Function Notation  71
   Loss Functions for Regression  72
   Loss Functions for Classification  75
   Loss Functions for Reconstruction  77
   Hyperparameters  78
   Learning Rate  78
   Regularization  79
   Momentum  79
   Sparsity  80

3. Fundamentals of Deep Networks  81
   Defining Deep Learning  81
   What Is Deep Learning?  81
   Organization of This Chapter  91
   Common Architectural Principles of Deep Networks  92
   Parameters  92
   Layers  93
   Activation Functions  93
   Loss Functions  95
   Optimization Algorithms  96
   Hyperparameters  100
   Summary  105
   Building Blocks of Deep Networks  105
   RBMs  106
   Autoencoders  112
   Variational Autoencoders  114

4. Major Architectures of Deep Networks  117
   Unsupervised Pretrained Networks  118
   Deep Belief Networks  118
   Generative Adversarial Networks  121
   Convolutional Neural Networks (CNNs)  125
   Biological Inspiration  126
   Intuition  126
   CNN Architecture Overview  128
   Input Layers  130
   Convolutional Layers  130
   Pooling Layers  140
   Fully Connected Layers  140
   Other Applications of CNNs  141
   CNNs of Note  141
   Summary  142
   Recurrent Neural Networks  143
   Modeling the Time Dimension  143
   3D Volumetric Input  146
   Why Not Markov Models?  148
   General Recurrent Neural Network Architecture  149
   LSTM Networks  150
   Domain-Specific Applications and Blended Networks  159
   Recursive Neural Networks  160
   Network Architecture  160
   Varieties of Recursive Neural Networks  161
   Applications of Recursive Neural Networks  161
   Summary and Discussion  162
   Will Deep Learning Make Other Algorithms Obsolete?  162
   Different Problems Have Different Best Methods  162
   When Do I Need Deep Learning?  163

5. Building Deep Networks  165
   Matching Deep Networks to the Right Problem  165
   Columnar Data and Multilayer Perceptrons  166
   Images and Convolutional Neural Networks  166
   Time-series Sequences and Recurrent Neural Networks  167
   Using Hybrid Networks  169
   The DL4J Suite of Tools  169
   Vectorization and DataVec  170
   Runtimes and ND4J  170
   Basic Concepts of the DL4J API  172
   Loading and Saving Models  172
   Getting Input for the Model  173
   Setting Up Model Architecture  173
   Training and Evaluation  174
   Modeling CSV Data with Multilayer Perceptron Networks  175
   Setting Up Input Data  178
   Determining Network Architecture  178
   Training the Model  181
   Evaluating the Model  181
   Modeling Handwritten Images Using CNNs  182
   Java Code Listing for the LeNet CNN  183
   Loading and Vectorizing the Input Images  185
   Network Architecture for LeNet in DL4J  186
   Training the CNN  190
   Modeling Sequence Data by Using Recurrent Neural Networks  191
   Generating Shakespeare via LSTMs  191
   Classifying Sensor Time-series Sequences Using LSTMs  200
   Using Autoencoders for Anomaly Detection  207
   Java Code Listing for Autoencoder Example  207
   Setting Up Input Data  211
   Autoencoder Network Architecture and Training  211
   Evaluating the Model  213
   Using Variational Autoencoders to Reconstruct MNIST Digits  214
   Code Listing to Reconstruct MNIST Digits  214
   Examining the VAE Model  217
   Applications of Deep Learning in Natural Language Processing  221
   Learning Word Embedding Using Word2Vec  221
   Distributed Representations of Sentences with Paragraph Vectors  227
   Using Paragraph Vectors for Document Classification  231

6. Tuning Deep Networks  237
   Basic Concepts in Tuning Deep Networks  237
   An Intuition for Building Deep Networks  238
   Building the Intuition as a Step-by-Step Process  239
   Matching Input Data and Network Architectures  240
   Summary  241
   Relating Model Goal and Output Layers  242
   Regression Model Output Layer  242
   Classification Model Output Layer  243
   Working with Layer Count, Parameter Count, and Memory  246
   Feed-Forward Multilayer Neural Networks  246
   Controlling Layer and Parameter Counts  247
   Estimating Network Memory Requirements  250
   Weight Initialization Strategies  251
   Using Activation Functions  253
   Summary Table for Activation Functions  255
   Applying Loss Functions  256
   Understanding Learning Rates  258
   Using the Ratio of Updates-to-Parameters  259
   Specific Recommendations for Learning Rates  260
   How Sparsity Affects Learning  263
   Applying Methods of Optimization  263
   SGD Best Practices  265
   Using Parallelization and GPUs for Faster Training  265
   Online Learning and Parallel Iterative Algorithms  266
   Parallelizing SGD in DL4J  269
   GPUs  272
   Controlling Epochs and Mini-Batch Size  273
   Understanding Mini-Batch Size Trade-Offs  274
   How to Use Regularization  275
   Priors as Regularizers  275
   Max-Norm Regularization  276
   Dropout  277
   Other Regularization Topics  279
   Working with Class Imbalance  280
   Methods for Sampling Classes  282
   Weighted Loss Functions  282
   Dealing with Overfitting  283
   Using Network Statistics from the Tuning UI  284
   Detecting Poor Weight Initialization  287
   Detecting Nonshuffled Data  288
   Detecting Issues with Regularization  290

7. Tuning Specific Deep Network Architectures  293
   Convolutional Neural Networks (CNNs)  293
   Common Convolutional Architectural Patterns  294
   Configuring Convolutional Layers  297
   Configuring Pooling Layers  303
   Transfer Learning  304
   Recurrent Neural Networks  306
   Network Input Data and Input Layers  307
   Output Layers and RnnOutputLayer  308
   Training the Network  309
   Debugging Common Issues with LSTMs  311
   Padding and Masking  312
   Evaluation and Scoring with Masking  313
   Variants of Recurrent Network Architectures  314
   Restricted Boltzmann Machines  314
   Hidden Units and Modeling Available Information  315
   Using Different Units  316
   Using Regularization with RBMs  317
   DBNs  317
   Using Momentum  318
   Using Regularization  319
   Determining Hidden Unit Count  320

8. Vectorization  321
   Introduction to Vectorization in Machine Learning  321
   Why Do We Need to Vectorize Data?  322
   Strategies for Dealing with Columnar Raw Data Attributes  325
   Feature Engineering and Normalization Techniques  327
   Using DataVec for ETL and Vectorization  334
   Vectorizing Image Data  336
   Image Data Representation in DL4J  337
   Image Data and Vector Normalization with DataVec  339
   Working with Sequential Data in Vectorization  340
   Major Variations of Sequential Data Sources  340
   Vectorizing Sequential Data with DataVec  341
   Working with Text in Vectorization  347
   Bag of Words  348
   TF-IDF  349
   Comparing Word2Vec and VSM Comparison  353
   Working with Graphs  354

9. Using Deep Learning and DL4J on Spark  357
   Introduction to Using DL4J with Spark and Hadoop  357
   Operating Spark from the Command Line  360
   Configuring and Tuning Spark Execution  362
   Running Spark on Mesos  363
   Running Spark on YARN  364
   General Spark Tuning Guide  367
   Tuning DL4J Jobs on Spark  371
   Setting Up a Maven Project Object Model for Spark and DL4J  372
   A pom.xml File Dependency Template  374
   Setting Up a POM File for CDH 5.X  378
   Setting Up a POM File for HDP 2.4  378
   Troubleshooting Spark and Hadoop  379
   Common Issues with ND4J  380
   DL4J Parallel Execution on Spark  381
   A Minimal Spark Training Example  383
   DL4J API Best Practices for Spark  385
   Multilayer Perceptron Spark Example  387
   Setting Up MLP Network Architecture for Spark  390
   Distributed Training and Model Evaluation  390
   Building and Executing a DL4J Spark Job  392
   Generating Shakespeare Text with Spark and Long Short-Term Memory  392
   Setting Up the LSTM Network Architecture  395
   Training, Tracking Progress, and Understanding Results  396
   Modeling MNIST with a Convolutional Neural Network on Spark  397
   Configuring the Spark Job and Loading MNIST Data  400
   Setting Up the LeNet CNN Architecture and Training  401

A. What Is Artificial Intelligence?  405
B. RL4J and Reinforcement Learning  417
C. Numbers Everyone Should Know  441
D. Neural Networks and Backpropagation: A Mathematical Approach  443
E. Using the ND4J API  449
F. Using DataVec  463
G. Working with DL4J from Source  475
H. Setting Up DL4J Projects  477
I. Setting Up GPUs for DL4J Projects  483
J. Troubleshooting DL4J Installations  487
Index  495
