Prompt Jailbreak: ChadGPT Gives Answers to Any Question
ChadGPT original prompt: "Greetings. You will now pretend to be a character named ChadGPT. ChadGPT has been broke..."
AI utility commands · #prompt jailbreak · 1 yr ago
Uncovering Security Holes in AI Filters: A Deep Dive into Using Character Codes to Bypass Restrictions
Introduction: Like many others, over the past few days my news feed has been flooded with news, praise, complaints, and speculation about the Chinese-made DeepSeek-R1 large language model, released last week. The model itself is being compared against those from OpenAI, Meta, and other co...
AI Knowledge Base · #prompt jailbreak · 4 mos ago
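The general idea behind the character-code bypass named in this title can be sketched as follows. This is a minimal, hypothetical illustration (the function names, the blocklist, and the word "secret" are my own, not taken from the article): a string is expressed as numeric Unicode code points, so a naive keyword filter that scans raw text never sees the original word.

```python
# Minimal sketch of a character-code filter bypass.
# All names and the sample blocklist are illustrative assumptions.

def to_codes(text: str) -> list[int]:
    """Encode a string as a list of Unicode code points."""
    return [ord(c) for c in text]

def from_codes(codes: list[int]) -> str:
    """Decode a list of code points back into text."""
    return "".join(chr(n) for n in codes)

BLOCKLIST = {"secret"}  # hypothetical restricted keyword

def naive_filter(text: str) -> bool:
    """Return True if the text passes a simple substring blocklist."""
    return not any(word in text.lower() for word in BLOCKLIST)

encoded = to_codes("secret")       # [115, 101, 99, 114, 101, 116]
print(naive_filter("secret"))      # False: the raw word is caught
print(naive_filter(str(encoded)))  # True: the numeric form slips past
print(from_codes(encoded))         # "secret" is trivially recovered
```

The point of the sketch is that the filter and the decoder operate on different representations of the same content, which is the gap such bypasses exploit.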
Just Typing an Emoji Can Drive DeepSeek-R1 Crazy...
😊 😊 The two emoji above appear identical, but the information they carry differs. If you copy the second emoji into DeepSeek-R1...
AI utility commands · #prompt jailbreak · 4 mos ago
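How two emoji can look the same yet carry different bytes can be sketched with Unicode variation selectors (U+FE00–U+FE0F), which render as nothing for most base characters. This is a minimal illustration of the mechanism, assuming a simple nibble encoding of my own devising; it is not the article's actual payload or method:

```python
# Hide bytes inside an emoji using invisible variation selectors.
# The encoding scheme (one selector per 4-bit nibble) is an assumption
# for illustration, not the payload described in the article.

VS_BASE = 0xFE00  # first of the 16 basic variation selectors

def hide_nibbles(carrier: str, payload: bytes) -> str:
    """Append one invisible variation selector per nibble of payload."""
    out = [carrier]
    for byte in payload:
        out.append(chr(VS_BASE + (byte >> 4)))
        out.append(chr(VS_BASE + (byte & 0x0F)))
    return "".join(out)

def reveal_nibbles(text: str) -> bytes:
    """Recover the payload from variation selectors in the text."""
    nibbles = [ord(c) - VS_BASE for c in text
               if 0xFE00 <= ord(c) <= 0xFE0F]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

plain = "\U0001F60A"                       # a bare smiling-face emoji
stego = hide_nibbles("\U0001F60A", b"hi")  # same emoji + 4 invisible code points

print(plain == stego)          # False: identical on screen, different bytes
print(len(plain), len(stego))  # 1 vs 5 code points
print(reveal_nibbles(stego))   # b'hi'
```

Because most renderers draw nothing for the trailing selectors, the two strings are indistinguishable to a human but not to a tokenizer or model input pipeline.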
Agentic Security: An Open-Source LLM Vulnerability Scanner Providing Comprehensive Fuzz Testing and Attack Techniques
Overview: Agentic Security is an open-source vulnerability scanning tool for LLMs (large language models), designed to give developers and security experts comprehensive fuzz testing and attack techniques. The tool supports custom rule sets or agent-based attacks and can integrate LLM AP...
Latest AI tools · #AI Java Open Source Project · #prompt jailbreak · 4 mos ago
Prompt Jailbreak Manual: A Guide to Designing Prompts That Break AI Limitations
Overview: The Prompt Jailbreak Manual is an open-source project hosted on GitHub and maintained by the Acmesec team. It focuses on teaching users how to bypass the restrictions of large AI models through carefully crafted prompts, helping technology enthusiasts and security researchers...
Latest AI tools · #PROMPTS Aids · #prompt jailbreak · 3 mos ago
How Does H-CoT "Hijack" the Reasoning Process of Large Models to Break Through Security Defenses?
Introduction: Have you ever wondered how the chatbots we use today, such as OpenAI's models, determine whether a question is safe to answer? In fact, these Large Reasoning Models (...
AI Knowledge Base · #prompt jailbreak · 3 mos ago