Backblaze compute partner CoreWeave is a specialized GPU cloud provider designed to power use cases such as AI/ML, graphics, and rendering up to 35x faster and for 80% less than generalized public clouds. Brandon Jacobs, an infrastructure architect at CoreWeave, joined us earlier this year for Backblaze Tech Day ’23. Brandon and I co-presented a session explaining both how to back up CoreWeave Cloud storage volumes to Backblaze B2 Cloud Storage and how to load a model from Backblaze B2 into the CoreWeave Cloud inference stack.
Since we recently published an article covering the backup process, in this blog post I’ll focus on loading a large language model (LLM) directly from Backblaze B2 into CoreWeave Cloud.
Below is the session recording from Tech Day; feel free to watch it instead of, or in addition to, reading this article.
More About CoreWeave
In the Tech Day session, Brandon covered the two sides of CoreWeave Cloud:
- Model training and fine tuning.
- The inference service.
To maximize performance, CoreWeave provides a fully-managed Kubernetes environment running on bare metal, with no hypervisors between your containers and the hardware.
CoreWeave provides a range of storage options: storage volumes that can be directly mounted into Kubernetes pods as block storage or a shared file system, running on solid state drives (SSDs) or hard disk drives (HDDs), as well as their own native S3-compatible object storage. Knowing that, you’re probably wondering, “Why bother with Backblaze B2, when CoreWeave has their own object storage?”
The answer echoes the first few words of this blog post: CoreWeave’s object storage is a specialized implementation, co-located with their GPU compute infrastructure, with high-bandwidth networking and caching. Backblaze B2, in contrast, is general-purpose cloud object storage, and includes features, such as Object Lock and lifecycle rules, that are less relevant to CoreWeave’s specialized object storage. There is also a price differential: currently, at $6/TB/month, Backblaze B2 is one-fifth the cost of CoreWeave’s object storage.
So, as Brandon and I explained in the session, CoreWeave’s native storage is a great choice for both the training and inference use cases, where you need the fastest possible access to data, while Backblaze B2 shines as longer-term storage for training, model, and inference data, as well as the destination for data output from the inference process. In addition, since Backblaze and CoreWeave are bandwidth partners, you can transfer data between our two clouds with no egress fees, freeing you from unpredictable data transfer costs.
Loading an LLM From Backblaze B2
To demonstrate how to load an archived model from Backblaze B2, I used CoreWeave’s GPT-2 sample. GPT-2 is a predecessor of the GPT-3.5 and GPT-4 LLMs used in ChatGPT. As such, it’s an accessible way to get started with LLMs, but, as you’ll see, it certainly doesn’t pass the Turing test!
This sample comprises two applications: a transformer and a predictor. The transformer implements a REST API, handling incoming prompt requests from client apps and encoding each prompt into a tensor, which it passes to the predictor. The predictor applies the GPT-2 model to the input tensor, returning an output tensor to the transformer for decoding into text that is returned to the client app. The two applications have different hardware requirements: the predictor needs a GPU, while the transformer is satisfied with just a CPU. They are therefore configured as separate Kubernetes pods and can be scaled up and down independently.
Since the GPT-2 sample includes instructions for loading data from Amazon S3, and Backblaze B2 features an S3-compatible API, it was a snap to modify the sample to load data from a Backblaze B2 Bucket. In fact, there was just a single line to change, in the s3-secret.yaml configuration file. The file is only 10 lines long, so here it is in its entirety:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
  annotations:
    serving.kubeflow.org/s3-endpoint: s3.us-west-004.backblazeb2.com
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <my-backblaze-b2-application-key-id>
  AWS_SECRET_ACCESS_KEY: <my-backblaze-b2-application-key>
```
As you can see, all I had to do was set the serving.kubeflow.org/s3-endpoint metadata annotation to my Backblaze B2 Bucket’s endpoint and paste in an application key and its ID.
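One detail that’s easy to miss: the values under data in a Kubernetes Secret must be base64-encoded rather than pasted in as plain text. Here’s a minimal sketch of preparing the two values, assuming you’re working in a shell with base64 available:

```bash
# Base64-encode the application key ID and application key, then paste each
# command's output into the corresponding field under data: in s3-secret.yaml.
# The -n flag stops echo from appending a trailing newline to the value.
echo -n '<my-backblaze-b2-application-key-id>' | base64
echo -n '<my-backblaze-b2-application-key>' | base64
```

Alternatively, Kubernetes accepts plain-text values under a stringData section and encodes them for you.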
While that was the only Backblaze B2-specific edit, I did have to configure the bucket and path where my model was stored. Here’s an excerpt from gpt-s3-inferenceservice.yaml, which configures the inference service itself:
```yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: gpt-s3
  annotations:
    # Target concurrency of 4 active requests to each container
    autoscaling.knative.dev/target: "4"
    serving.kubeflow.org/gke-accelerator: Tesla_V100
spec:
  default:
    predictor:
      minReplicas: 0  # Allow scale to zero
      maxReplicas: 2
      serviceAccountName: s3-sa  # The B2 credentials are retrieved from the service account
      tensorflow:
        # B2 bucket and path where the model is stored
        storageUri: s3://<my-bucket>/model-storage/124M/
        runtimeVersion: "1.14.0-gpu"
...
```
Aside from the storageUri configuration, you can see how the predictor application’s pod is configured to scale between zero and two instances (“replicas” in Kubernetes terminology). The remainder of the file contains the transformer pod configuration, allowing it to scale from zero to a single instance.
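For that storageUri to resolve, the GPT-2 model files need to already be in the bucket at that path. As an illustration (not part of the sample itself), here’s one way you might stage the model using the AWS CLI pointed at the Backblaze B2 S3-compatible endpoint; a sketch, assuming a local ./124M directory containing the model and an AWS CLI profile configured with the same application key and ID:

```bash
# Copy a local copy of the 124M GPT-2 model into the bucket and path
# referenced by storageUri, via Backblaze B2's S3-compatible endpoint
aws s3 cp ./124M s3://<my-bucket>/model-storage/124M/ \
    --recursive \
    --endpoint-url https://s3.us-west-004.backblazeb2.com
```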
Running an LLM on CoreWeave Cloud
Spinning up the inference service involved a kubectl apply command for each configuration file and a short wait for the CoreWeave GPU cloud to bring up the compute and networking infrastructure.
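The commands look something like this; a minimal sketch, assuming just the two files shown above (the full sample may include additional manifests, such as one defining the s3-sa service account):

```bash
# Create the secret holding the Backblaze B2 endpoint and credentials
kubectl apply -f s3-secret.yaml

# Create the gpt-s3 inference service (predictor and transformer)
kubectl apply -f gpt-s3-inferenceservice.yaml

# Watch the inference service until it reports that it is ready
kubectl get inferenceservice gpt-s3 --watch
```

Once the predictor and transformer services were ready, I used curl to submit my first prompt to the transformer endpoint: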
```
% curl -d '{"instances": ["That was easy"]}' \
    http://gpt-s3-transformer-default.tenant-dead0a.knative.chi.coreweave.com/v1/models/gpt-s3:predict

{"predictions": ["That was easy for some people, it's just impossible for me,\" Davis said. \"I'm still trying to" ]}
```
In the video, I repeated the exercise, feeding GPT-2’s response back into it as a prompt a few times to generate a few paragraphs of text. Here’s what it came up with:
“That was easy: If I had a friend who could take care of my dad for the rest of his life, I would’ve known. If I had a friend who could take care of my kid. He would’ve been better for him than if I had to rely on him for everything.
The problem is, no one is perfect. There are always more people to be around than we think. No one cares what anyone in those parts of Britain believes,
The other problem is that every decision the people we’re trying to help aren’t really theirs. If you have to choose what to do”
If you’ve used ChatGPT, you’ll recognize how far LLMs have come since GPT-2’s release in 2019!
Run Your Own Large Language Model
While CoreWeave’s GPT-2 sample is an excellent introduction to the world of LLMs, it’s a bit limited. If you’re looking to get deeper into generative AI, another sample, Fine-tune Large Language Models with CoreWeave Cloud, shows how to fine-tune a model from the more recent EleutherAI Pythia suite.
Since CoreWeave is a specialized GPU cloud designed to deliver best-in-class performance, up to 35x faster and 80% less expensive than generalized public clouds, it’s a great choice for workloads such as AI, ML, and rendering. And, as you’ve seen in this blog post, it’s easy to integrate with Backblaze B2 Cloud Storage, with no data transfer costs. For more information, contact the CoreWeave team.