Titlebook: Introduction to Deep Learning; From Logical Calculus to Artificial Intelligence; Sandro Skansi; Textbook 2018; Springer International Publishing AG, part of Springer Nature

Views: 28114 | Replies: 50

Posted on 2025-3-21 16:08:50
Title: Introduction to Deep Learning
Subtitle: From Logical Calculus to Artificial Intelligence
Author: Sandro Skansi
Overview: Offers a welcome clarity of expression, maintaining mathematical rigor yet presenting the ideas in an intuitive and colourful manner. Includes references to open problems studied in other disciplines…
Series: Undergraduate Topics in Computer Science
Description: This textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state of the art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website. Topics and features: introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning; discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network; examines convolutional neural networks, and the recurrent connections to a feed-forward neural network; describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning; presents a brief history of artificial intelligence…
Publication date: Textbook 2018
Keywords: Deep learning; Neural networks; Pattern recognition; Natural language processing; Autoencoders
Edition: 1
DOI: https://doi.org/10.1007/978-3-319-73004-2
ISBN (softcover): 978-3-319-73003-5
ISBN (ebook): 978-3-319-73004-2
Series ISSN: 1863-7310
Series E-ISSN: 2197-1781
Copyright: Springer International Publishing AG, part of Springer Nature 2018

Posted on 2025-3-22 07:33:15
Feedforward Neural Networks: …presents these abstract and graphical objects as mathematical objects (vectors, matrices and tensors). Rosenblatt's perceptron rule is also presented in detail, which makes it clear that training a multilayered perceptron with this rule is impossible. The Delta rule is presented as an alternative, along with the idea of iterative…
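
The perceptron rule the abstract refers to can be sketched in a few lines of NumPy; the toy AND data, learning rate and epoch count below are illustrative assumptions, not the book's own code. The update uses the unit's own output, which is exactly why the rule offers no way to assign errors to hidden units:

```python
import numpy as np

# Minimal sketch of Rosenblatt's perceptron rule for a single unit.
# The AND data, learning rate and epoch count are illustrative assumptions.
def step(z):
    return 1 if z >= 0 else 0

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND targets (linearly separable)
w, b, eta = np.zeros(2), 0.0, 0.1

for epoch in range(20):
    for xi, ti in zip(X, y):
        out = step(w @ xi + b)
        # Update only on mistakes, using the unit's own output; there is
        # no analogous target for a hidden unit, hence no multilayer version.
        w += eta * (ti - out) * xi
        b += eta * (ti - out)

print(w, b)  # weights of a separating hyperplane for AND
```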
Posted on 2025-3-22 10:01:16
Modifications and Extensions to a Feed-Forward Neural Network: …The problem of local minima, one of the main problems in machine learning, is explored with all of its intricacies. The main strategy against local minima is regularization: adding a regularization term, weighted by a regularization parameter, when learning. Both L1 and L2 regularization are explored and explained in detail. …
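
As a rough illustration of the regularization idea, here is a minimal NumPy sketch of gradient descent on a least-squares loss with an L2 penalty; the toy data, eta and lam are assumptions made for the example. Swapping the penalty for lam * np.sum(np.abs(w)) (with lam * np.sign(w) in the gradient) gives the L1 variant:

```python
import numpy as np

# Minimal sketch of gradient descent on a least-squares loss with an
# L2 penalty. The toy data, eta and lam are assumptions for illustration.
def grad(w, X, y, lam):
    # d/dw [ ||Xw - y||^2 + lam * ||w||^2 ]
    return 2 * X.T @ (X @ w - y) + 2 * lam * w

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
w, eta, lam = np.zeros(3), 0.005, 0.5

for _ in range(1000):
    w -= eta * grad(w, X, y, lam)  # the penalty shrinks w toward zero

print(w)
```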
Posted on 2025-3-22 16:33:35
Convolutional Neural Networks: …regression accepts data, and defines 1D and 2D convolutional layers as a natural extension of logistic regression. The chapter also details how to connect the layers and how to handle the dimensionality problems that arise. The local receptive field is introduced as a core concept of any convolutional architecture, and the…
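
A minimal sketch of the local receptive field at work, assuming a plain NumPy "valid" 2D convolution with an arbitrary 3x3 kernel (none of this is the book's own code): each output value is computed from a small patch of the input, with the same weights shared across all positions:

```python
import numpy as np

# Minimal sketch of a "valid" 2D convolution; the kernel is an arbitrary
# assumption chosen to act as a crude vertical-edge detector.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]   # the local receptive field
            out[i, j] = np.sum(patch * kernel)  # same weights at every position
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
print(conv2d(image, kernel).shape)  # (4, 4) feature map
```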
Posted on 2025-3-22 17:12:58
Recurrent Neural Networks: …The basic settings of learning (sequence to label, sequence to sequence of labels, and sequences with no labels) are introduced and explained in probabilistic terms. The role of hidden states is presented in a detailed exposition (with abundant illustrations) in the setting of a simple recurrent network…
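
The hidden-state recurrence can be made concrete with a short NumPy sketch of a simple (Elman-style) recurrent step; the dimensions and random weights below are assumptions for illustration. The same weight matrices are reused at every time step, so the final hidden state is a fixed-size summary of an arbitrary-length sequence:

```python
import numpy as np

# Minimal sketch of the hidden-state recurrence in a simple recurrent
# network. Sizes and random weights are assumptions for illustration.
rng = np.random.default_rng(1)
input_size, hidden_size = 4, 3
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The new hidden state mixes the current input with the previous state.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

sequence = rng.normal(size=(5, input_size))  # a toy sequence of 5 vectors
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)  # the same weights are reused at every time step

print(h)  # a fixed-size summary of the whole sequence
```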
Posted on 2025-3-22 22:21:13
Autoencoders: …was left out in Chap. ., completing the exposition of principal component analysis and demonstrating what a distributed representation is in mathematical terms. The chapter then introduces the main unsupervised learning technique for deep learning, the autoencoder. The structural aspects are presented…
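
To make the connection to principal component analysis concrete, here is a minimal NumPy sketch of a linear autoencoder trained by gradient descent; the sizes, learning rate and random data are assumptions for the example. With linear units and squared reconstruction error, the optimal code is known to span the same subspace PCA would find, which is what makes the autoencoder a natural generalization:

```python
import numpy as np

# Minimal sketch of a linear autoencoder trained with gradient descent.
# Sizes, learning rate and the random data are assumptions for illustration.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))  # 100 samples, 8 features
code_size, eta = 2, 0.01

W_enc = rng.normal(scale=0.1, size=(8, code_size))
W_dec = rng.normal(scale=0.1, size=(code_size, 8))

for _ in range(2000):
    Z = X @ W_enc          # encoder: a 2-dimensional distributed code
    E = Z @ W_dec - X      # reconstruction error of the decoder
    # Gradients of the mean squared reconstruction error.
    W_dec -= eta * (Z.T @ E) / len(X)
    W_enc -= eta * (X.T @ (E @ W_dec.T)) / len(X)

print(np.mean((X @ W_enc @ W_dec - X) ** 2))  # final reconstruction error
```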