Titlebook: Large Language Models in Cybersecurity: Threats, Exposure and Mitigation. Andrei Kucharavy, Octave Plancherel, Vincent Lenders (eds.), Book, 2024

Views: 34322 | Replies: 51
Posted on 2025-3-21 18:30:52
Title: Large Language Models in Cybersecurity
Subtitle: Threats, Exposure and Mitigation
Editors: Andrei Kucharavy, Octave Plancherel, Vincent Lenders
Overview: This book is open access, which means that you have free and unlimited access. Provides practitioners with knowledge about inherent cybersecurity risks related to LLMs. Provides methodologies on how to…
Description: This open access book provides cybersecurity practitioners with the knowledge needed to understand the risks of the increased availability of powerful large language models (LLMs) and how these risks can be mitigated. It attempts to outrun malicious attackers by anticipating what they could do. It also alerts LLM developers to the cybersecurity risks of their work and provides them with tools to mitigate those risks. The book starts in Part I with a general introduction to LLMs and their main application areas. Part II collects descriptions of the most salient threats LLMs represent in cybersecurity, be they as tools for cybercriminals or as novel attack surfaces when integrated into existing software. Part III focuses on forecasting the exposure and the development of the technologies and science underpinning LLMs, as well as the macro-level levers available to regulators to further cybersecurity in the age of LLMs. Finally, in Part IV, mitigation techniques that should allow the safe and secure development and deployment of LLMs are presented. The book concludes with two final chapters in Part V, one speculating what a secure design and integration of LLMs from first principles…
Publication date: 2024
Keywords: Open Access; large language models; cybersecurity; cyberdefense; neural networks; societal implications; r…
Edition: 1
DOI: https://doi.org/10.1007/978-3-031-54827-7
ISBN (softcover): 978-3-031-54829-1
ISBN (eBook): 978-3-031-54827-7
Copyright: The Editor(s) (if applicable) and The Author(s) 2024
The publication information is being updated.

[Metric charts for this title (no data rendered): impact factor, impact-factor subject ranking, web visibility, web-visibility subject ranking, citation count, citation-count subject ranking, annual citations, annual-citations subject ranking, reader feedback, reader-feedback subject ranking]
Posted on 2025-3-21 20:56:57
Conversational Agents: … evaluation of model output, which is then used for further fine-tuning. Models fine-tuned with Reinforcement Learning from Human Feedback (RLHF) perform better, but the process is resource-intensive and specific to each model. Another critical difference in the performance of various CAs is their ability to access auxiliary services for task delegation.
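
As a concrete illustration of how pairwise human preferences drive RLHF-style fine-tuning, here is a minimal sketch of the Bradley-Terry reward-modeling objective that such pipelines typically optimize. The linear reward model, feature vectors, and dimensions are invented for illustration and are not from the book:

```python
# Minimal sketch of pairwise preference (Bradley-Terry) reward modeling,
# the step in RLHF where human judgments become a trainable signal.
# All data here is synthetic; a real pipeline scores transformer activations.
import numpy as np

rng = np.random.default_rng(0)

# Toy "embeddings" of preferred vs. dispreferred model outputs (hypothetical).
chosen = rng.normal(loc=1.0, size=(64, 8))    # features of preferred answers
rejected = rng.normal(loc=0.0, size=(64, 8))  # features of rejected answers

w = np.zeros(8)  # parameters of a toy linear reward model r(x) = w.x

def bradley_terry_loss(w: np.ndarray) -> float:
    # P(chosen preferred over rejected) = sigmoid(r(chosen) - r(rejected))
    margin = chosen @ w - rejected @ w
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-margin)))))

lr = 0.1
for step in range(200):
    margin = chosen @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))  # predicted preference probability
    # Gradient of the negative log-likelihood of the observed preferences.
    grad = -np.mean((1.0 - p)[:, None] * (chosen - rejected), axis=0)
    w -= lr * grad

print(f"final preference loss: {bradley_terry_loss(w):.4f}")
```

A full RLHF pipeline would replace the linear model with a transformer-based reward model and feed its scores into a policy-optimization step such as PPO, which is what makes the procedure resource-intensive and model-specific.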
LLM Controls Execution Flow Hijacking: … critical systems, developing verification tools for prompts and the resulting API calls, implementing security-by-design good practices, and enhancing incident logging and alerting mechanisms can all be considered to reduce the novel attack surface presented by LLMs.
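
To make the prompt and API-call verification mitigation concrete, here is a small, hypothetical sketch of an allowlist-based verifier that checks an LLM-proposed tool call before executing it. The tool names, policy schema, and handlers are invented for illustration, not taken from the book:

```python
# Hypothetical allowlist verifier for LLM-proposed tool calls: a call runs
# only if its tool name and argument types match a declared policy, so a
# hijacked model cannot steer execution into arbitrary functions.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class ToolPolicy:
    handler: Callable[..., Any]
    allowed_args: dict[str, type]  # argument name -> required type

# Invented example tool; a real deployment registers its own handlers.
POLICIES: dict[str, ToolPolicy] = {
    "get_weather": ToolPolicy(
        handler=lambda city: f"weather report for {city}",
        allowed_args={"city": str},
    ),
}

def verify_and_run(tool_name: str, args: dict[str, Any]) -> Any:
    """Reject any call whose name or arguments fall outside the policy."""
    policy = POLICIES.get(tool_name)
    if policy is None:
        raise PermissionError(f"tool {tool_name!r} is not on the allowlist")
    if set(args) != set(policy.allowed_args):
        raise ValueError(f"unexpected arguments for {tool_name!r}: {sorted(args)}")
    for name, value in args.items():
        if not isinstance(value, policy.allowed_args[name]):
            raise TypeError(f"argument {name!r} must be {policy.allowed_args[name].__name__}")
    return policy.handler(**args)

# An LLM-proposed call passes only if it conforms to the declared schema.
print(verify_and_run("get_weather", {"city": "Zurich"}))  # allowed
try:
    verify_and_run("delete_files", {"path": "/"})         # blocked
except PermissionError as err:
    print("blocked:", err)
```

Pairing such a verifier with the logging and alerting mechanisms mentioned above means every rejected call also leaves an auditable incident trail.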
Private Information Leakage in LLMs: … generative AI. This chapter relates the threat of information leakage to other adversarial threats, provides an overview of the current state of research on the mechanisms involved in memorization in LLMs, and discusses adversarial attacks aiming to extract memorized information from LLMs.
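
As a toy illustration of the memorization-extraction threat this chapter discusses, the following sketch ranks candidate strings by perplexity under a tiny bigram model: a string memorized from the training data scores anomalously low, which is the signal canary-extraction attacks exploit. The model, corpus, and canary string are invented stand-ins for a real LLM and a real secret:

```python
# Toy perplexity-based memorization test: a "canary" seen during training
# is far more predictable to the model than superficially similar strings.
import math
from collections import Counter

CANARY = "the secret code is 4721"
corpus = ("language models can memorize rare training strings . " * 20
          + CANARY + " . more ordinary text about models . " * 5)

tokens = corpus.split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
vocab = len(unigrams)

def perplexity(text: str) -> float:
    """Add-one-smoothed bigram perplexity under the toy model."""
    toks = text.split()
    log_prob = 0.0
    for prev, cur in zip(toks, toks[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(toks) - 1, 1))

# The memorized canary stands out as anomalously "easy" for the model,
# while an unseen variant of the same template scores much higher.
for candidate in (CANARY,
                  "the secret code is 9999",
                  "unrelated random words here"):
    print(f"{perplexity(candidate):10.2f}  {candidate}")
```

Against a real LLM the attacker would use the model's own token log-probabilities instead of bigram counts, but the ranking principle is the same.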