Published by The MIT Press, 2007
ISBN 10: 0262026252 ISBN 13: 9780262026253
Seller: HPB-Red, Dallas, TX, U.S.A.
Hardcover. Condition: Good. Connecting readers with great books since 1972! Used textbooks may not include companion materials such as access codes, etc. May have some wear or writing/highlighting. We ship orders daily and Customer Service is our top priority!
Published by MIT Press, Cambridge, MA, 2007
ISBN 10: 0262026252 ISBN 13: 9780262026253
Cloth. Condition: Very Good to Near Fine. 396 pp. Tightly bound. Corners not bumped. Text is free of markings. A letter "T" is stamped on the bottom fore-edge.
Published by MIT Press, 2007
ISBN 10: 0262026252 ISBN 13: 9780262026253
Seller: Ergodebooks, Houston, TX, U.S.A.
Hardcover. Condition: Good. Illustrated. Solutions for learning from large-scale datasets, including kernel learning algorithms that scale linearly with the volume of the data and experiments carried out on realistically large datasets.

Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large-scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms. After a detailed description of state-of-the-art support vector machine technology, an introduction to the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically.

Contributors: Léon Bottou, Yoshua Bengio, Stéphane Canu, Eric Cosatto, Olivier Chapelle, Ronan Collobert, Dennis DeCoste, Ramani Duraiswami, Igor Durdanovic, Hans-Peter Graf, Arthur Gretton, Patrick Haffner, Stefanie Jegelka, Stephan Kanthak, S. Sathiya Keerthi, Yann LeCun, Chih-Jen Lin, Gaëlle Loosli, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Gunnar Rätsch, Vikas Chandrakant Raykar, Konrad Rieck, Vikas Sindhwani, Fabian Sinz, Sören Sonnenburg, Jason Weston, Christopher K. I. Williams, Elad Yom-Tov.
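The "algorithms that scale linearly with the volume of the data" that the blurb describes can be illustrated with a stochastic-gradient-trained linear SVM, where each epoch is a single pass over the training set. This is a minimal sketch of the general idea, not code from the book; all names and parameter values here are illustrative.

```python
import random

def sgd_linear_svm(data, labels, epochs=20, lr=0.1, lam=0.01):
    """Train a linear SVM (hinge loss + L2 regularization) with SGD.

    Cost per epoch is one pass over the data, i.e. linear in the
    number of training points. Hyperparameters are illustrative.
    """
    dim = len(data[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        order = list(range(len(data)))
        random.shuffle(order)
        for i in order:
            x, y = data[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            # Subgradient step on lam/2 * ||w||^2 + hinge loss.
            if margin < 1:
                w = [wj - lr * (lam * wj - y * xj) for wj, xj in zip(w, x)]
                b += lr * y
            else:
                w = [wj - lr * lam * wj for wj in w]
    return w, b

# Linearly separable toy data: +1 class near (2, 2), -1 class near (-2, -2).
random.seed(0)
data = [(2 + random.gauss(0, 0.3), 2 + random.gauss(0, 0.3)) for _ in range(20)]
data += [(-2 + random.gauss(0, 0.3), -2 + random.gauss(0, 0.3)) for _ in range(20)]
labels = [1] * 20 + [-1] * 20

w, b = sgd_linear_svm(data, labels)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1 for x in data]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

On well-separated data like this the learned hyperplane classifies the training set essentially perfectly, while the per-epoch cost stays proportional to the number of points.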
Published by LAP LAMBERT Academic Publishing, 2011
ISBN 10: 384654146X ISBN 13: 9783846541463
Seller: Ammareal, Morangis, France
Softcover. Condition: Good. Former library book. 2011 edition. Ammareal gives back up to 15% of this item's net price to charity organizations.
Published by MIT Press, 2007
ISBN 10: 0262026252 ISBN 13: 9780262026253
Seller: Books Puddle, New York, NY, U.S.A.
Condition: New. pp. xii + 396.
Published by MIT Press, 2007
ISBN 10: 0262026252 ISBN 13: 9780262026253
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. pp. xii + 396, 116 illus.
Published by MIT Press, 2007
ISBN 10: 0262026252 ISBN 13: 9780262026253
Seller: ALLBOOKS1, Parafield, SA, Australia
Published by LAP LAMBERT Academic Publishing, 2011
ISBN 10: 384654146X ISBN 13: 9783846541463
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Printed after ordering. Classification algorithms are widely used in many application domains. Most of these domains deal with massive collections of data and hence demand classification algorithms that scale well with the size of the data sets involved. A classification algorithm is said to be scalable if there is no significant increase in its time and space requirements (without compromising generalization performance) as the training set grows. The Support Vector Machine (SVM) is one of the most celebrated kernel-based classification methods in machine learning, and an SVM capable of handling large-scale classification problems would be an ideal candidate in many real-world applications. Training an SVM classifier is usually formulated as a Quadratic Programming (QP) problem, and the existing solution strategies for this problem have time and space complexity that is (at least) quadratic in the number of training points, which makes SVM training very expensive. This thesis addresses the scalability of SVM training algorithms to make them feasible for large training data sets.
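The quadratic space cost the thesis blurb refers to comes from the n-by-n kernel (Gram) matrix that a standard dual QP solver materializes over the training set. A small back-of-the-envelope sketch (sizes and helper names here are illustrative, not from the thesis):

```python
def gram_matrix_entries(n):
    """Entries in the full n-by-n kernel matrix a dual QP solver
    would store: grows quadratically in the training set size."""
    return n * n

def gram_matrix_bytes(n, bytes_per_entry=8):
    """Kernel-matrix memory at double precision (8 bytes per entry)."""
    return gram_matrix_entries(n) * bytes_per_entry

# Doubling the training set quadruples kernel-matrix storage.
small = gram_matrix_bytes(10_000)  # 10k points -> 800 MB
large = gram_matrix_bytes(20_000)  # 20k points -> 3.2 GB
ratio = large / small              # -> 4.0
```

Even before counting solver time, storing the kernel matrix alone becomes prohibitive at a few hundred thousand points, which is why scalable SVM training avoids materializing it in full.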
Published by The MIT Press, 2007
ISBN 10: 0262026252 ISBN 13: 9780262026253
Seller: Iridium_Books, DH, SE, Spain
Hardcover. Condition: Good. 0262026252.