“Object segmentation” is nothing new; AI researchers have worked on it for years. But building these models has typically been a time-consuming process, requiring extensive human annotation of images and considerable computing resources, and the resulting models were usually specialized to narrow use cases. Now, researchers at Meta have unveiled the Segment Anything Model (SAM), which can cut out any object in any scene, even objects unlike anything it has seen before. The model can also do this in response to a variety of prompts, from text descriptions to mouse clicks or even eye-tracking data.

“SAM has learned a general notion of what objects are, and it can generate masks for any object in any image or any video,” the researchers wrote in a blog post. “We believe the possibilities are broad, and we are excited by the many potential use cases we haven’t even imagined yet.”
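Meta released SAM’s code and weights openly, so the click-style prompting described above can be tried directly. Here is a minimal sketch using the open-source segment-anything package, where a single foreground point stands in for a mouse click; the image path and click coordinates are placeholders, and the checkpoint file is the ViT-H weights from Meta’s release page.

```python
# Minimal sketch: prompt SAM with a single "click" (point) to get object masks.
# Assumes: pip install segment-anything opencv-python, plus the downloaded
# ViT-H checkpoint (sam_vit_h_4b8939.pth) from Meta's release page.
import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

# Load the pretrained model and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read an image (placeholder path) and convert BGR -> RGB as SAM expects.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # computes the image embedding once

# The "mouse click" prompt: one foreground point at pixel (x=500, y=375).
point = np.array([[500, 375]])
label = np.array([1])  # 1 = foreground, 0 = background

# Ask for several candidate masks and keep the highest-scoring one.
masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean HxW array: the cut-out object
```

Because the image embedding is computed once in set_image, additional clicks on the same image only rerun the lightweight mask decoder, which is what makes the interactive, real-time prompting the researchers describe feasible.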