Titlebook: Compression Schemes for Mining Large Datasets; A Machine Learning P T. Ravindra Babu,M. Narasimha Murty,S.V. Subrahman Book 2013 Springer-V

Posted on 2025-3-21 16:43:22
Title: Compression Schemes for Mining Large Datasets
Subtitle: A Machine Learning P
Editors: T. Ravindra Babu, M. Narasimha Murty, S.V. Subrahman
Video: http://file.papertrans.cn/232/231990/231990.mp4
Overview: Examines all aspects of data abstraction generation using a least number of database scans. Discusses compressing data through novel lossy and non-lossy schemes. Proposes schemes for carrying out clustering and classification directly in the compressed domain.
Series: Advances in Computer Vision and Pattern Recognition
Description: This book addresses the challenges of data abstraction generation using a least number of database scans, compressing data through novel lossy and non-lossy schemes, and carrying out clustering and classification directly in the compressed domain. Schemes are presented that are shown to be efficient in terms of both space and time, while simultaneously providing the same or better classification accuracy. Features: describes a non-lossy compression scheme based on run-length encoding of patterns with binary-valued features; proposes a lossy compression scheme that recognizes a pattern as a sequence of features and identifies subsequences; examines whether the identification of prototypes and features can be achieved simultaneously through lossy compression and efficient clustering; discusses ways to make use of domain knowledge in generating abstraction; reviews optimal prototype selection using genetic algorithms; suggests possible ways of dealing with big-data problems using multiagent systems.
Published: 2013 (Book)
Keywords: Classification; Clustering; Data Abstraction Generation; Data Compression; High-Dimensional Datasets
Edition: 1
DOI: https://doi.org/10.1007/978-1-4471-5607-9
ISBN (softcover): 978-1-4471-7055-6
ISBN (eBook): 978-1-4471-5607-9
Series ISSN: 2191-6586
Series E-ISSN: 2191-6594
Copyright: Springer-Verlag London 2013
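As an illustration of the kind of non-lossy scheme the description mentions, here is a minimal run-length encoding sketch for patterns with binary-valued features. The function names `rle_encode`/`rle_decode` and the representation (first bit plus run lengths) are illustrative assumptions, not the book's implementation:

```python
def rle_encode(bits):
    """Run-length encode a binary pattern. Since a binary sequence
    alternates between runs of 0s and 1s, storing the first bit plus
    the run lengths is enough to reconstruct it losslessly."""
    if not bits:
        return (None, [])
    runs = []
    current, count = bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return (bits[0], runs)

def rle_decode(first, runs):
    """Invert rle_encode, reconstructing the original bit list."""
    bits, value = [], first
    for n in runs:
        bits.extend([value] * n)
        value ^= 1  # consecutive runs alternate between 0 and 1
    return bits

pattern = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
first, runs = rle_encode(pattern)
print(first, runs)                          # 0 [3, 2, 1, 4]
assert rle_decode(first, runs) == pattern   # round-trips losslessly
```

Note that classification in the compressed domain would operate on the `(first, runs)` representation directly; the sketch above only shows the encoding itself.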

Posted on 2025-3-22 09:15:17
…how the divide-and-conquer approach of multiagent systems improves the handling of huge datasets. We propose four multiagent systems that can help in generating abstraction from big data. We provide suggested reading and bibliographic notes; a list of references is given at the end.
Posted on 2025-3-22 18:20:05
Data Mining Paradigms: …an intermediate representation. The discussion on classification includes topics such as incremental classification and classification based on intermediate abstraction. We further discuss frequent-itemset mining along two directions: divide-and-conquer itemset mining and intermediate abstraction…
Posted on 2025-3-23 00:18:28
Dimensionality Reduction by Subsequence Pruning: …nearest neighbors. This results in lossy compression at two levels. Generating compressed testing data forms an interesting scheme too. We demonstrate a significant reduction in data and show the scheme working on large handwritten-digit data. We provide bibliographic notes and references at the end of the chapter.
Posted on 2025-3-23 02:14:38
Data Compaction Through Simultaneous Selection of Prototypes and Features: …provide a better classification accuracy than the original dataset. In this direction, we implement the proposed scheme on two large datasets, one with binary-valued features and the other with floating-point-valued features. At the end of the chapter, we provide bibliographic notes and a list of references.
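For a flavor of how a small set of prototypes can stand in for a full dataset using a single database scan, here is a minimal leader-clustering sketch. The function name `leaders`, the Euclidean distance, and the threshold value are illustrative assumptions, not the chapter's exact scheme:

```python
def leaders(patterns, threshold):
    """Single-scan prototype selection: each pattern either joins the
    first existing leader within `threshold` (Euclidean distance) or
    becomes a new leader (prototype) itself."""
    selected = []
    for x in patterns:
        for lead in selected:
            dist = sum((a - b) ** 2 for a, b in zip(x, lead)) ** 0.5
            if dist <= threshold:
                break  # x is represented by this leader
        else:
            selected.append(x)  # no leader close enough; x becomes one
    return selected

data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9), (0.05, 0.1)]
print(leaders(data, threshold=1.0))  # [(0.0, 0.0), (5.0, 5.0)]
```

Because each pattern is examined exactly once, the scan count stays minimal; the trade-off is that the prototypes depend on the presentation order of the data.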