Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the 'targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution. Chapter 1 demonstrates that even for a deterministic system and 'benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5.
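The synopsis's core contrast, between least-squares training (which at best recovers the conditional mean) and maximum-likelihood training of a full conditional density, can be sketched numerically. The following is a hypothetical toy example, not code from the book: it samples a bimodal target, shows that the mean falls between the two modes where the data density is low, and compares the average negative log-likelihood (the maximum-likelihood training loss) of a single Gaussian against that of a two-component mixture, the kind of density a mixture-style network output layer would model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bimodal target distribution: y is drawn from one of two narrow
# Gaussian modes at -1 and +1 with equal probability.
n = 10_000
modes = rng.choice([-1.0, 1.0], size=n)
y = modes + 0.1 * rng.normal(size=n)

# A network trained on squared error ideally outputs the conditional
# mean, which here sits near 0 -- far from either mode, in a region
# where observations almost never occur.
cond_mean = y.mean()

def gauss(y, mu, sigma):
    """Univariate Gaussian probability density."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Average negative log-likelihood for two candidate density models:
# a single Gaussian fitted by moments, and the true two-mode mixture.
nll_single = -np.mean(np.log(gauss(y, y.mean(), y.std())))
nll_mixture = -np.mean(np.log(0.5 * gauss(y, -1.0, 0.1)
                              + 0.5 * gauss(y, 1.0, 0.1)))

print(f"conditional mean:      {cond_mean:.3f}")
print(f"NLL, single Gaussian:  {nll_single:.3f}")
print(f"NLL, two-mode mixture: {nll_mixture:.3f}")
```

The mixture achieves a much lower negative log-likelihood, which is why the book's maximum-likelihood training scheme, applied to a network flexible enough to represent multimodal densities, can succeed where a squared-error regressor cannot.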


ISBN 10: 1852330953
ISBN 13: 9781852330958

New
Quantity Available: 1

**Book Description** Condition: New. Seller Inventory # UK16637

New
Quantity Available: 1

**Book Description** Condition: New. New; fast delivery; 100% money back if any problem with product or services. Seller Inventory # ABECA18374

Published by Springer (2016)

New
Paperback
Quantity Available: 1

**Book Description** Springer, 2016. Paperback. Condition: New. Print-on-demand book; new; publication year 2016; not signed; fast shipping from the UK. Seller Inventory # ria9781852330958_lsuk

Published by Springer (1999)

New
Quantity Available: > 20

**Book Description** Springer, 1999. Paperback. Condition: New. New book. Shipped from the US within 10 to 14 business days. This book is printed on demand. Established seller since 2000. Seller Inventory # IQ-9781852330958

Published by Springer London Ltd, United Kingdom (1999)

New
Paperback
Quantity Available: 10

**Book Description** Springer London Ltd, United Kingdom, 1999. Paperback. Condition: New. Language: English. Brand new book. Softcover reprint of the original 1st ed. 1999. Seller Inventory # LIE9781852330958

Published by Springer London Ltd, United Kingdom (1999)

New
Paperback
Quantity Available: 10

**Book Description** Springer London Ltd, United Kingdom, 1999. Paperback. Condition: New. Language: English. Brand new book. Softcover reprint of the original 1st ed. 1999. Seller Inventory # AAV9781852330958

Published by Springer (1999)

New
Quantity Available: > 20

**Book Description** Springer, 1999. Paperback. Condition: New. New book. Delivered from our UK warehouse in 4 to 14 business days. This book is printed on demand. Established seller since 2000. Seller Inventory # IQ-9781852330958

Published by Springer (1999)

New
Paperback
Quantity Available: 1

**Book Description** Springer, 1999. Paperback. Condition: New. This listing is a new book, a title currently in print, which we order directly and immediately from the publisher. For all enquiries, please contact Herb Tandree Philosophy Books directly; customer service is our primary goal. Seller Inventory # HTANDREE0415888

Published by Springer (2013)

New
Softcover
Quantity Available: 15

**Book Description** Springer, 2013. Condition: New. This item is printed on demand for shipment within 3 working days. Seller Inventory # LP9781852330958