Today's image: Generative AI in a nutshell via https://blog.crisp.se/
- Big Data / AI
- Visual Language Models on NVIDIA Hardware with VILA
- Douzone Bizon upgrades 'ONE AI', its tax and accounting AI assistant: "strengthened comprehensive income tax filing features"
- GN⁺: The errors in every map of China (medium.com/@anastasia.bizyayeva)
- Claude app for iOS released (apps.apple.com)
- E2B's Code Interpreter SDK allows you to add code interpreting capabilities to your AI apps.
- "AI checks your age": German airport convenience stores apply AI to self-service sales of age-restricted products
- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
- Gemini: Google's savior, come to ruin the Android user experience
- Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting
- Llama 3 70B takes the king’s crown 👑 from GPT-4 Turbo
- Maum AI open-sources 'Llama Mal-Hummingbird', a large language model that overcomes Korean-language limitations, at 'AI EXPO KOREA 2024'
- Paint by Inpaint - Learning to Add Image Objects by Removing Them First
- Instructor is a Python library that makes it a breeze to work with structured outputs from large language models (LLMs).
- 'Between expectation and reality': a reality check on generative AI
- Maum AI open-sources a Llama-based LLM that overcomes Korean-language limitations
- Cria: run LLMs simply from Python (github.com/leftmove)
- To improve RAG, you generally need to do two things: 1️⃣ improve retrieval, and 2️⃣ improve generation.
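The two halves above can be sketched in a few lines of plain Python. This is a toy keyword-overlap retriever (hypothetical corpus and scoring; a real system would use embeddings and a vector index), shown only to make the retrieval-then-generation split concrete:

```python
# Minimal sketch of the "improve retrieval" half: score documents by word
# overlap with the query and keep the top-k as context for generation.
# A real system would use embeddings and a vector index instead.

def retrieve(query, docs, k=1):
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "Llama 3 is an open-weight large language model from Meta",
    "Retrieval augments generation with external documents",
    "The weather in Seoul is mild in spring",
]
context = retrieve("what is the Llama 3 large language model", docs)
# The "improve generation" half then consumes the retrieved context:
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: what is Llama 3?"
```

Improving either half independently (better ranking above, better prompting below) is what most RAG tuning boils down to.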
- We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG).
- A Hitchhiker’s Guide to Speculative Decoding
- TURNS OUT THAT EXTREMELY IMPRESSIVE SORA DEMO... WASN’T EXACTLY MADE WITH SORA
- Code Llama is a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks.
- Automatic Creative Selection with Cross-Modal Matching
- Accelerating Llama3 FP8 Inference with Triton Kernels
- Meta Llama 3 models are now available in Amazon Bedrock
- STT: Stateful Tracking with Transformers for Autonomous Driving
- 배민선물하기 AI 메시지 제작기: 생성 AI가 센스 있는 선물 메시지를 대신 쓰기까지
- Data-Efficient Multimodal Fusion on a Single GPU
- Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring.
- “자율주행으로 시속 250km 포뮬러 경기” 아부다비 A2RL, 자율주행 추월에도 성공
- SUQL stands for Structured and Unstructured Query Language. It augments SQL with several important free text primitives for a precise, succinct, and expressive representation. It can be used to build chatbots for relational data sources that contain both structured and unstructured information.
- GPT-4 Can't Reason
- The Anthropic Cookbook provides code and guides designed to help developers build with Claude, providing copy-able code snippets that you can easily integrate into your own projects.
- An evaluation of RAG Retrieval Chunking Methods
- PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
- RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing
- This is a repository of RALM surveys containing a summary of state-of-the-art RAG and other technologies, organized according to our survey paper: RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing.
- STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases
- Cohere Command R and R+ models now generally available in Amazon Bedrock
- EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction (paper, poster)
- Capabilities of Gemini Models in Medicine
- Memary: long-term memory for autonomous agents (github.com/kingjulio8238)
- ScrapeGraphAI is a Python web scraping library that uses LLMs and direct graph logic to create scraping pipelines for websites, documents and XML files.
- Microsoft and OpenAI: "GPT-4 achieves more efficient Korean tokenization"
- Google announces Med-Gemini, a family of Gemini models fine-tuned for medical tasks! 🔬
- The "end game" of LLMs is going to be as follows:
- Not just NVIDIA: GPU programming that runs everywhere
- OpenAI Releases New Fine-Tuning API Features
- ExecuTorch Alpha: Taking LLMs and AI to the Edge with Our Community and Partners
- Assessing The Potential Of Mid-Sized Language Models For Clinical QA
- Retrieval Augmented Generation by Pinecone
- Quantization is considerably more harmful for LLaMA 3 than for LLaMA 2.
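To see why fewer bits hurt, here is a toy uniform round-to-nearest quantizer (illustrative numbers only; the linked finding is about measured benchmark degradation on real models):

```python
# Toy illustration of quantization loss: round-to-nearest uniform
# quantization of the same weights at decreasing bit widths.

def quantize(ws, bits):
    levels = 2 ** bits - 1
    lo, hi = min(ws), max(ws)
    scale = (hi - lo) / levels
    # Snap each weight to the nearest representable level, then dequantize.
    return [lo + round((w - lo) / scale) * scale for w in ws]

weights = [-0.731, -0.214, 0.003, 0.402, 0.655, 0.918]
for bits in (8, 4, 2):
    deq = quantize(weights, bits)
    err = max(abs(w - q) for w, q in zip(weights, deq))
    print(f"{bits}-bit max error: {err:.4f}")
# The maximum rounding error grows as the bit width shrinks.
```

The worst-case error is half the quantization step, and the step doubles with every bit removed, which is the mechanical reason low-bit formats degrade a tightly-trained model like Llama 3 more visibly.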
- 메타 "의료 LLM 메디트론, 라마 2 기반으로 의료계 자원 격차 해소 기대"
- Welcome to the ChatGPT API Free Reverse Proxy, offering free self-hosted API access to ChatGPT (gpt-3.5-turbo) with OpenAI's familiar structure, so no code changes are needed.
- Beating Proprietary Models with a Quick Fine-Tune
- Finetuning Embeddings:
- StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation
- A Survey of Generative Search and Recommendation in the Era of Large Language Models
- Checkpoint and restore functionality for CUDA is exposed through a command-line utility called cuda-checkpoint
- Ollama v0.1.33: support for Llama 3, Phi 3, and Qwen 110B (github.com/ollama)
- The Bllossom language model is a Korean-English bilingual language model based on the open-source LLama3. It enhances the connection of knowledge between Korean and English. It has the following features:
- "Win over busier, pickier customers": retailers bet big on AI adoption - [24th Anniversary Special: The GenAI Era, Part 5] AI in active use from logistics to shopping
- [Financial LLM Series 2] Strategies for building financial datasets, with training examples
- AIOS, a Large Language Model (LLM) Agent operating system, embeds large language model into Operating Systems (OS) as the brain of the OS, enabling an operating system "with soul" -- an important step towards AGI.
- Make Your LLM Fully Utilize the Context
- Lessons after a half-billion GPT tokens
- torchtitan is a proof-of-concept for Large-scale LLM training using native PyTorch.
- Running Python on a serverless GPU instance for machine learning inference
- The 01 Project is building an open-source ecosystem for AI devices. Our flagship operating system can power conversational devices like the Rabbit R1, Humane Pin, or Star Trek computer.
- Data collection strategies for AI startups in 2024 (press.airstreet.com)
- LLaVA++: Extending Visual Capabilities with LLaMA-3 and Phi-3
- Visual Instruction Tuning
- AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs
- GN⁺: What LLMs can never do (strangeloopcanon.com)
- OpenVoice v2: versatile instant voice cloning (github.com/myshell-ai)
- What happened in the first week after the Meta Llama 3 release (ai.meta.com)
- Apple releases eight small AI language models aimed at on-device use (arstechnica.com)
- Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine
- Multistage RAG with LlamaIndex and Cohere Reranking: A Step-by-Step Guide
- Demis Hassabis on AI: 'the mind in the machine' (translated)
- GraphRAG: Unlocking LLM discovery on narrative private data
- Qwen1.5-110B: the first 100B+ model in Alibaba's open-source Qwen1.5 series (qwenlm.github.io)
- Perplexity, the tool 'only those in the know use': a hands-on review
- Nitro - Embeddable AI: A fast, lightweight 3mb inference server to supercharge apps with local AI.
- Multimodal Search with Snowflake Embedding and MAX Engine
- In RipX DAW, audio is stored in Hit'n'Mix's revolutionary Rip format, a giant step up from waveforms, enabling full control over all aspects of sound. (RipX DAW PRO: make the BGM you want for Shorts from AI music in a 10-minute cut; creators, worry no more!)
- Apple joins the open-source AI ranks, releasing the 'OpenELM' LLM
- A Multimodal Automated Interpretability Agent by MIT CSAIL
- Microsoft unveils the Phi-3 small language model series: "SLMs have strengths of their own"
- Stable Diffusion 3 API now available as Stable Assistant effort looms
- 💬 RepoQA - 🚩The First Benchmark for Long-Context Code Understanding.🚩
- Hands-on with Adobe Photoshop's improved AI features…
- 7 Ways to Make Use of Llama-3 for Free
- Google attempts an AI-focused reorganization: "merging the Research and DeepMind AI teams"
- Lotte Members announces adoption of Google's Gemini in its digital marketing platform
- "Why Copilot didn't completely win us over": advice from three CIOs who adopted AI early
- Colossal-AI provides a collection of parallel components for you. We aim to support you to write your distributed deep learning models just like how you write your model on your laptop.
- IT spending swells with generative AI: "watch out for money down the drain"
- Imagenet.int8: Entire Imagenet dataset in 5GB
- Microsoft presents Korean AI transformation case studies
- How to Maximize LLM Performance
- ConsistentID:Portrait Generation with Multimodal Fine-Grained Identity Preserving
- MGM-13B-HD - The framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with HD image understanding, reasoning, and generation simultaneously.
- Perplexity, aiming to be 'the Google of AI', raises $62.7 million; strengthens ties with SK and with Japanese and German telecoms
- Open-source LLM Ecosystem at Hugging Face
- "98% of manufacturers struggle with collaboration and productivity gains due to data problems": Hexagon report
- "Data is the key": Salesforce releases survey on AI and data trust
- "Smarter because it learned more": a close look at Meta's latest LLM, Llama 3
- 'The puzzle pieces have fallen into place': is a Copilot that needs no internet coming?
- "We tried LLMs in our job-matching service": LinkedIn's real-world lessons on generative AI
- DeepLearning.AI's The Batch, issue 246
- DeepL officially launches an 'AI writing service' on the heels of translation: "specialized for business, beyond general-purpose LLMs"
- Who wins AI's 'war of money'? It hinges on future report cards
- Snowflake launches 'Arctic', an enterprise-grade open-source LLM
- Big Tech fortunes diverge: Microsoft and Google smile while Meta and Intel shares plunge
- A Recipe for Training Neural Networks by Andrej Karpathy
- XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.
- LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
- Cohere Toolkit - Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications.
- Write Unit Tests for Your Python Code With ChatGPT
- Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot.
- CopilotKit - The Open-Source Copilot Framework: Build, deploy, and operate fully custom AI Copilots.
- IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild
- tokenizers by Huggingface - Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.
- Detect AI Text by Just Looking at it
- A look at the early impact of Meta Llama 3
- Retrieval-Augmented Dual Instruction Tuning (RA-DIT)
- You can use the mlx-lm package to fine-tune an LLM with low rank adaptation (LoRA) for a target task. The example also supports quantized LoRA (QLoRA).
- Ridi recommender system Phase 2: adopting a Feature Store
- T-RAG: Lessons from the LLM Trenches
- MoDE: CLIP Data Experts via Clustering
- Unsloth: fine-tune Llama 3 2x faster with 6x longer context and 68% less VRAM (unsloth.ai)
- The easiest way to get started with LlamaIndex is by using create-llama. This CLI tool enables you to quickly start building a new LlamaIndex application, with everything set up for you.
- CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data
- Chat With Product PDF Manuals Using Hybrid Search
- [17th Anniversary / The State of Enterprise Generative AI, Part 1] 6 out of 10 'satisfied'; used mainly for translation and interpretation
- No internet, no GPU needed: "lightweight AI Phi-3 will completely transform industries", says Sébastien Bubeck, Microsoft VP of Generative AI
- Explaining black box machine learning models is critical to gaining leadership's buy-in and trust.
- How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs' internal prior
- How to Finetune phi-3 on MacBook Pro
- Two decades of recommender systems at Amazon.com
- Developing highlight detection for short-clip generation
- TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding
- CoreNet: Apple's comprehensive deep neural network toolkit for a variety of tasks (github.com/apple)
- Jina AI Reader: a tool that turns URLs into LLM-friendly input (github.com/jina-ai)
- Running LLMs Locally: A Step-by-Step Guide
- Stanford CS25 V4 Transformers course
- LLaMA Board: A One-stop Web UI for Getting Started with LLaMA Factory
- Internist.ai 7b is a medical domain large language model trained by medical doctors to demonstrate the benefits of a physician-in-the-loop approach. The training data was carefully curated by medical doctors to ensure clinical relevance and required quality for clinical practice.
- PyTorch 2.3 Release Blog
- Arctic is a dense-MoE Hybrid transformer architecture pre-trained from scratch by the Snowflake AI Research Team.
- The “it” in AI models is the dataset.
- Patterns in music listening: intense tracks skew 'summer, daytime, male'; calm tracks 'winter, night, female' [Park Jae-hyuk's The World Through Data]
- Model Cards & Prompt formats for Meta Llama 3
- Open-source LLMs on the march: explosive response to Llama 3, and Snowflake enters the race
- Adobe released new features:
- Understanding Large Language Models - A Cross-Section of the Most Relevant Literature To Get Up to Speed
- Automated Commit Message Generation with Large Language Models: An Empirical Study and Beyond
- Rules of Machine Learning: Best Practices for ML Engineering (packed with very practical lessons)
- Phi-3 has "only" been trained on 5x fewer tokens than Llama 3 (3.3 trillion instead of 15 trillion)
- CoreNet is a deep neural network toolkit that allows researchers and engineers to train standard and novel small and large-scale models for a variety of tasks, including foundation models (e.g., CLIP and LLM), object classification, object detection, and semantic segmentation.
- Penzai is a JAX library for writing models as legible, functional pytree data structures, along with tools for visualizing, modifying, and analyzing them.
- GaussianTalker: Speaker-specific Talking Head Synthesis via 3D Gaussian Splatting
- Detecting and redacting PII using Amazon Bedrock
- Introducing more enterprise-grade features for API customers by OpenAI
- The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions
- GN⁺: Phi-3 technical report (arxiv.org)
- Feature Test for Phi-3-mini-4k-instruct using LlamaIndex
- SnapKV: LLM Knows What You are Looking for Before Generation
- DEVELOPING RAPIDLY WITH GENERATIVE AI
- GN⁺: Putting LLM technology to work in financial markets (thegradient.pub)
- Meta: "We've found a way to leap toward AGI; developing an architecture different from the Transformer"
- Safely repairing broken builds with ML
- Show GN: Corely AI launches an extension that distills YouTube videos to their key points in 10 seconds (Chrome, Whale) (chromewebstore.google.com)
- The Tokenizer Playground - Experiment with different tokenizers (running locally in your browser).
- 🖥️ WebLlama🦙 - Building agents that can browse the web by following instructions and talking to you
- Local RAG with LlamaIndex and Microsoft phi-3 via Ollama
- Multi-Head Mixture-of-Experts
- GN⁺: The man who killed Google Search (wheresyoured.at)
- Graphist - Graphic Design with Large Multimodal Model
- UvA Foundation Models Course - MSc in Artificial Intelligence at the University of Amsterdam.
- Prompt2Model - Generate Deployable Models from Instructions
- MultiBooth: Towards Generating All Your Concepts in an Image from Text
- SPLATE: Sparse Late Interaction Retrieval
- Mini-LLaMA MLX - A simple implementation of LLaMA 2 that you can run experiments with on your MacBook.
- Mastering Prompt Compression with LLM Lingua: A Deep Dive into Context Optimization
- Rotary Embeddings (RoPE) are one of the fundamental building blocks of the Llama 3 implementation 🦙
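As a sketch of what RoPE computes: each (even, odd) pair of dimensions is rotated by an angle proportional to the token position, following the common θ = base^(−2i/d) convention, so query-key dot products depend only on relative position. The vector and base below are toy values:

```python
import math

# Minimal RoPE sketch: rotate each (even, odd) dimension pair of a vector
# by a position-dependent angle. Rotations preserve the vector's norm, and
# rotated dot products depend only on relative position.

def rope(vec, pos, base=10000.0):
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)      # angle for this dimension pair
        x, y = vec[i], vec[i + 1]
        out.append(x * math.cos(theta) - y * math.sin(theta))
        out.append(x * math.sin(theta) + y * math.cos(theta))
    return out

q = [1.0, 0.0, 0.5, -0.5]
q0 = rope(q, pos=0)   # position 0 is the identity rotation
q5 = rope(q, pos=5)   # rotated, but with the same norm as q
```

In Llama-style models this rotation is applied to queries and keys inside every attention layer, rather than adding a positional vector to the embeddings once at the input.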
- Penzai: a JAX toolkit for building, editing, and visualizing neural networks (github.com/google-deepmind)
- How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study
- Microsoft Introduces Phi-3, LLM That Runs on the Phone
- If you just started playing with Quantized LLMs on local machine and are confused about which model formats to download 🤔- here are some rough points
- This page is a distribution site for the ground-truthed dataset for use in document analysis and recognition experiments.
- TED 2064: a TED promo video made with OpenAI Sora (twitter.com/TEDTalks)
- morphic: a chatbot-style search engine
- "Hosting 'ChatGPT' or 'Llama 2' is cheaper than building your own model... around 7.5 million KRW a month"
- Cosine similarity clearly explained (a go-to metric for vector similarity):
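A minimal sketch in plain Python (toy vectors): the metric is just the dot product of the two vectors divided by the product of their norms.

```python
import math

# Cosine similarity: dot(a, b) / (|a| * |b|).
# ~1.0 = same direction, 0.0 = orthogonal, -1.0 = opposite direction.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))   # parallel -> ~1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))   # orthogonal -> 0.0
```

Because it ignores magnitude and compares only direction, it is the default choice for comparing text embeddings, where vector length often reflects artifacts rather than meaning.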
- A Trivial Jailbreak Against Llama 3
- Start making music with AI - In-browser text-to-music generation
- Sparrow is an innovative open-source solution for efficient data extraction and processing from various documents and images. It seamlessly handles forms, invoices, receipts, and other unstructured data sources.
- 글로벌 AI 플랫폼 1위는 챗GPT…학습·코딩도 20위 내 '선전'
- GN⁺: Llama 3 8B는 Wizard 2 8x22B에 필적하는 성능을 보임 (huggingface.co)
- Pretzel - 데이터 탐색/시각화를 위한 오픈소스 오프라인 브라우저 기반 도구 (github.com/pretzelai)
- llmlingua - This great lib from Microsoft can compress your prompt massively.
- ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback
- I built an open-source library called "html-torch" that condenses HTML into an LLM-friendly digest, to save on transmission costs when sending HTML to LLMs (GPT, Gemini, Claude) for summarization, automated search, or automated web-page control.
- Building a Korean chatbot with Llama 3 and LangChain
- Cosine Similarity and Text Embeddings In Python with OpenAI
- Film director who tested 'Sora' for a month: "not up to making films yet"
- GN⁺: VASA-1: generating talking faces in real time from a single photo and an audio clip (microsoft.com)
- Using and Finetuning Pretrained Transformers
- Llama 3 is less heavily censored than previous versions (ollama.com)
- Meta AI upgraded to Llama 3 (about.fb.com)
- Segment Anything WebGPU - In-browser image segmentation w/ 🤗 Transformers.js
- Mark Zuckerberg interview: Llama 3, and why he would open-source a $10B model (dwarkeshpatel.com)
- Building Meta’s GenAI Infrastructure
- DREAM: Distributed RAG Experimentation Framework
- Hardware
- 7 tips for deploying Wi-Fi 6E
- The AI Gadget That Can Make Your Life Better—and Two That Definitely Won’t
- IDC Korea: "Generative AI is reshaping the server market; Korean server market to reach 4.7246 trillion KRW by 2028"
- Why this x86 fan welcomes the Qualcomm Snapdragon X Elite
- "Korean server market shrank 5.1% last year, on concentrated GPU investment and supply delays"
- Axion Processor: Google Announces Its First Arm-Based CPU
- AI PCs all the buzz? Industry experts: "plenty of interest, but adoption unlikely until next year"
- GN⁺: TSMC unveils 1.6nm process technology with backside power delivery (tomshardware.com)
- "Worth the price": everything about ultrawide monitors
- "As long as the Apple Watch exists", portable AI gadgets have no future
- Intel builds 'Hala Point', the world's largest neuromorphic system: "AI inference 50x faster"
- It's time to digitize every landline
- MSX Story - 1
- A blueprint for 2024's A18 chip: how much smarter will it get?
- 5 things I realized after switching from an iPhone to a Pixel 8 Pro
- Bluetooth market takes a direct hit from the downturn: "2023 shipments 400 million units below forecast"
- TSMC races for chip supremacy with new A16 process for AI-ready future
- Why you shouldn't use 'snap-on' magnetic USB-C adapters
- Tiny but mighty: The Phi-3 small language models with big potential
- Advanced RAG 10: Corrective Retrieval Augmented Generation (CRAG)
- Imandra gives AI the power of reasoning - Large Language Models (LLMs) use Imandra to build mental models and reason about them, unlocking the incredible potential of generative AI for industries where correctness and compliance matter.
- seemore: Implement a Vision Language Model from Scratch
- Intel Arc A-series, the graphics line that grows: its first year in review [Kwon Yong-man's Geek Lab]
- Interstellar 8-Track: How Voyager's Vintage Tech Keeps Running
- IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models
- LongEmbed - Extending Embedding Models for Long Context Retrieval
- Voyager 1's incredible journey continues after NASA patches code in 46-year-old chip from a distance of 15 billion miles
- Worth Reading
- Catalog for 'Yomul: The Cats That Beguiled Us' exhibition
- "Only comics, webtoons, and web novels make money": the publishing market's unending slump
- KTX trains are getting slower
- 2023 publishing market statistics report released
- GN⁺: Ozempic, a game changer threatening the tobacco, confectionery, and liquor industries (curingaddiction.substack.com)
- "7% of open-chat users have bought a product after seeing an ad"
- A second jeonse fraud at 31: no country for multi-unit housing tenants
- GN⁺: Signs of cracks in Apple's walled garden (theverge.com)
- Choi Hyung-kwang column | The birth of the ultra-low-price market (feat. China platforms)
- The secret behind the fast-rising won-dollar exchange rate
- 2024 Korea Wealth Report by the Hana Institute of Finance
- GN⁺: No one buys books (elysian.press) - publishing-industry insights from the Penguin vs. DOJ trial
- Red tech strikes back: China's '1.4-billion-person laboratory' is on the move
- What if my legs went 'numb' in just three hours… [Reporter Nam's Experiential Journalism]
(Bonus: The rule of thumb. A quick guide to approximate measurements using your hand. via @Rainmaker1973)
EOB