That pup looks like they’re into craft beer and bands you won’t have heard of.
Just to add some cool etymology to your reply: the word silhouette comes from a type of affordable portrait made by quickly painting or cutting out a person’s profile in black paper. These, and portrait miniatures, quickly fell out of favour with the advent of photography.
The word silhouette is derived from the name of Étienne de Silhouette, a French finance minister who, in 1759, was forced by France’s credit crisis during the Seven Years’ War to impose severe economic demands upon the French people, particularly the wealthy.[3] Because of de Silhouette’s austere economies, his name became synonymous with anything done or made cheaply and so with these outline portraits.[4][5] Prior to the advent of photography, silhouette profiles cut from black card were the cheapest way of recording a person’s appearance.[6][7]
https://en.wikipedia.org/wiki/Silhouette
This is also an interesting article on the subject of pre-photographic portraiture: https://en.m.wikipedia.org/wiki/Portrait_miniature
I read a series of super interesting posts a few months back where someone was exploring the dimensional concept space in LLMs. The jumping-off point was the discovery of weird glitch tokens that would break GPT models, sending them into a tailspin of nonsense, but the author went on to present a really interesting deep dive into how concepts are clustered dimensionally, with some fascinating examples explained, for me at least, in a very accessible manner. I don’t know whether being able to identify those conceptual clusters of weights means we’re anywhere close to being able to manually tune them, but the series is well worth a read for the curious. There’s also a YouTube series that really dives into the nitty-gritty of LLMs; much of it goes over my head, but it helped me understand at least the outlines of how the magic happens.
(Excuse any confused terminology here, my knowledge level is interested amateur!)
Posts on glitch tokens and exploring how an LLM encodes concepts in multidimensional space. https://www.lesswrong.com/posts/8viQEp8KBg2QSW4Yc/solidgoldmagikarp-iii-glitch-token-archaeology
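If it helps make the “concepts clustered in multidimensional space” idea concrete, here’s a tiny toy sketch (my own illustration, not real GPT weights or anything from the linked posts): each word gets a vector, related words point in similar directions, and cosine similarity measures how close two directions are.

```python
import numpy as np

# Toy illustration: each "concept" is a point in a high-dimensional space;
# related concepts end up pointing in similar directions.
rng = np.random.default_rng(0)

# Hypothetical 8-dimensional embeddings (made up for this example).
# "dog" and "puppy" share a common direction plus a little noise;
# "bank" is an unrelated random direction.
base_animal = rng.normal(size=8)
embeddings = {
    "dog":   base_animal + 0.1 * rng.normal(size=8),
    "puppy": base_animal + 0.1 * rng.normal(size=8),
    "bank":  rng.normal(size=8),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 = same direction, near 0 = unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_related = cosine(embeddings["dog"], embeddings["puppy"])
sim_unrelated = cosine(embeddings["dog"], embeddings["bank"])
print(f"dog~puppy: {sim_related:.2f}, dog~bank: {sim_unrelated:.2f}")
```

Real models do this with thousands of dimensions learned from data rather than hand-built vectors, but the geometry is the same idea — and (as I understand it) glitch tokens are ones whose vectors ended up in weird, under-trained corners of that space.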
YouTube series is by 3Blue1Brown - https://m.youtube.com/@3blue1brown
This one is particularly relevant - https://m.youtube.com/watch?v=9-Jl0dxWQs8