Keshan-disease posted on 2025-3-31 00:14:15
EXiT-B: A New Approach for Extracting Maximal Frequent Subtrees from XML Data
…of our algorithm is that there is no need to perform a tree join operation during the phase of generating maximal frequent subtrees. Thus, the task of finding maximal frequent subtrees can be significantly simplified compared with previous approaches.
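The abstract only states the property (no tree joins while generating maximal frequent subtrees), not the bit-sequence machinery of EXiT-B itself. As a rough, hedged sketch of what "maximal frequent subtree" means, the following toy Python encodes trees as frozensets of (parent, child) label edges — a simplifying assumption, not the paper's representation — and checks frequency and maximality over a tiny database.

```python
# Toy illustration of "maximal frequent subtree"; NOT the EXiT-B algorithm.
# Trees are simplified to frozensets of (parent_label, child_label) edges.

def is_subtree(candidate, tree):
    """Edge-set containment as a stand-in for subtree inclusion."""
    return candidate <= tree

def is_frequent(candidate, database, min_support):
    """Frequent = occurs in at least min_support database trees."""
    return sum(is_subtree(candidate, t) for t in database) >= min_support

def is_maximal(candidate, frequent_subtrees):
    """Maximal = no frequent proper supertree exists."""
    return not any(candidate < other for other in frequent_subtrees)

database = [
    frozenset({("book", "title"), ("book", "author"), ("author", "name")}),
    frozenset({("book", "title"), ("book", "author")}),
    frozenset({("book", "title"), ("book", "price")}),
]
candidates = [
    frozenset({("book", "title")}),
    frozenset({("book", "title"), ("book", "author")}),
    frozenset({("book", "price")}),
]

frequent = [c for c in candidates if is_frequent(c, database, min_support=2)]
maximal = [f for f in frequent if is_maximal(f, frequent)]
print([sorted(m) for m in maximal])  # only the largest frequent subtree remains
```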
Hallowed posted on 2025-3-31 08:41:05
Knowledge Reduction of Rough Set Based on Partition
…distribution reduction, assignment reduction and maximum distribution reduction are special cases of partition reduction. Based on partition reduction, new types of knowledge reduction can be established to meet particular requirements.
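As a minimal sketch of the underlying idea (assumed simplified setting, not the paper's formal definitions): an attribute subset induces a partition of the universe, and a reduct-style check asks whether that partition still refines a chosen target partition. The helper names below are illustrative.

```python
from collections import defaultdict

def induced_partition(objects, attributes):
    """Group object indices by their values on the chosen attributes."""
    blocks = defaultdict(list)
    for i, obj in enumerate(objects):
        blocks[tuple(obj[a] for a in attributes)].append(i)
    return [set(b) for b in blocks.values()]

def refines(partition, target):
    """Every block of `partition` lies inside some block of `target`."""
    return all(any(block <= t for t in target) for block in partition)

# Tiny made-up decision table.
objects = [
    {"headache": "yes", "temp": "high",   "flu": "yes"},
    {"headache": "yes", "temp": "high",   "flu": "yes"},
    {"headache": "no",  "temp": "high",   "flu": "yes"},
    {"headache": "no",  "temp": "normal", "flu": "no"},
]
decision = induced_partition(objects, ["flu"])

# Does the single attribute "temp" already preserve the decision partition?
print(refines(induced_partition(objects, ["temp"]), decision))  # True here
```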
menopause posted on 2025-3-31 14:40:55
Multi-attributes Image Analysis for the Classification of Web Documents Using Unsupervised Technique
…meaningful clusters. The performance of the system is compared with the Hierarchical Agglomerative Clustering (HAC) algorithm. Evaluation shows that, in our approach, similar images fall into the same region, making it possible to retrieve images under family relationships.
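The HAC baseline named in the abstract can be reproduced in spirit with scikit-learn; the feature vectors, linkage and cluster count below are made-up illustrations, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical stand-in for multi-attribute image features (e.g. colour,
# texture and shape descriptors flattened into one vector per image).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(20, 16)),  # one visual "family"
    rng.normal(loc=2.0, scale=0.3, size=(20, 16)),  # another family
])

# Hierarchical Agglomerative Clustering baseline; parameters are illustrative.
hac = AgglomerativeClustering(n_clusters=2, linkage="average")
labels = hac.fit_predict(features)
print(labels)
```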
多骨 posted on 2025-3-31 17:56:22
Automatic Image Annotation Based on Topic-Based Smoothing
…smoothed”. In this paper, we present a topic-based smoothing method to overcome the sparseness problem and integrate it with a general image annotation model. Experimental results on 5,000 images demonstrate that our method achieves a significant improvement in annotation effectiveness over an existing method.
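To show why smoothing matters for sparse annotation counts, here is a generic interpolation-style sketch: the sparse empirical word distribution is mixed with a distribution derived from topics. The paper's exact topic-based formulation is not reproduced; `p_topic` and the mixing weight are hypothetical.

```python
import numpy as np

def smooth(counts, p_topic, lam=0.7):
    """Mix the sparse empirical word distribution with a topic distribution."""
    counts = np.asarray(counts, dtype=float)
    p_ml = counts / counts.sum() if counts.sum() > 0 else np.zeros_like(counts)
    return lam * p_ml + (1.0 - lam) * np.asarray(p_topic)

vocab = ["sky", "water", "grass", "tiger"]
counts = [3, 1, 0, 0]            # raw image-word co-occurrence counts: sparse
p_topic = [0.4, 0.3, 0.2, 0.1]   # hypothetical topic-model estimate

for word, p in zip(vocab, smooth(counts, p_topic)):
    print(f"{word}: {p:.3f}")    # zero-count words now receive some mass
```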
抚慰 posted on 2025-4-1 00:39:37
Model Trees for Classification of Hybrid Data Types
…ves the discretization procedure usually necessary for tree construction, while decision tree induction itself can deal with nominal attributes, which may not be handled well by, e.g., SVM methods. Experiments show that our proposed method performs better than other competing learning methods.
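A toy sketch of the hybrid idea under assumed names and data (not the paper's construction): the tree splits directly on a nominal attribute, so numeric attributes need no discretization at the split, and each leaf holds a model (here an SVM) trained on the remaining numeric attributes.

```python
from collections import defaultdict
from sklearn.svm import SVC

# (nominal attribute, numeric attributes, class label) — made-up data.
data = [
    ("red",  [0.2, 1.1], 0), ("red",  [0.3, 0.9], 0), ("red",  [2.1, 2.0], 1),
    ("blue", [0.1, 0.2], 0), ("blue", [1.9, 2.2], 1), ("blue", [2.2, 1.8], 1),
]

# One split on the nominal attribute "colour" -> one leaf model per value.
leaves = defaultdict(lambda: ([], []))
for colour, numeric, label in data:
    leaves[colour][0].append(numeric)
    leaves[colour][1].append(label)

models = {c: SVC(kernel="linear").fit(X, y) for c, (X, y) in leaves.items()}

def predict(colour, numeric):
    """Route by the nominal split, then let the leaf SVM decide."""
    return models[colour].predict([numeric])[0]

print(predict("red", [2.0, 2.1]))
```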