Related Spaces: https://huggingface.co/spaces/gradio/helsinki_translation_en_es Tags: HUB, SPACES, EMBED
The Hugging Face Hub is a central platform that has over 190,000 models, 32,000 datasets and 40,000 demos, also known as Spaces. Although Hugging Face is famous for its 🤗 transformers and diffusers libraries, the Hub also supports dozens of ML libraries, such as PyTorch, TensorFlow, spaCy, and many others across a variety of domains, from computer vision to reinforcement learning.
Gradio has multiple features that make it extremely easy to leverage existing models and Spaces on the Hub. This guide walks through these features.
Using regular inference with pipelines

First, let's build a simple interface that translates text from English to Spanish. Among the more than a thousand models shared by the University of Helsinki, there is an existing model, opus-mt-en-es, that does precisely this!
The 🤗 transformers library has a very easy-to-use abstraction, pipeline(), that handles most of the complex code to offer a simple API for common tasks. By specifying the task and an (optional) model, you can use an existing model in just a few lines of code:
import gradio as gr
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

def predict(text):
    return pipe(text)[0]["translation_text"]

demo = gr.Interface(
    fn=predict,
    inputs='text',
    outputs='text',
)

demo.launch()
But Gradio actually makes it even easier to convert a pipeline into a demo, simply by using the gradio.Interface.from_pipeline method, which skips the need to specify the input and output components:
from transformers import pipeline
import gradio as gr
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
demo = gr.Interface.from_pipeline(pipe)
demo.launch()
The previous code produces an interface that you can try right in your browser.
Hugging Face has a free service called the Inference API, which allows you to send HTTP requests to models in the Hub. For transformers- or diffusers-based models, the API can be 2 to 10 times faster than running the inference yourself. The API is free (rate limited), and you can switch to dedicated Inference Endpoints when you want to use it in production.
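Under the hood, this is a plain HTTP call. As a minimal sketch of what such a request looks like, here is a direct query to the Inference API using the requests library (YOUR_HF_TOKEN is a placeholder; the Authorization header is optional for public models):

import requests

API_URL = "https://api-inference.huggingface.co/models/Helsinki-NLP/opus-mt-en-es"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # placeholder token, optional for public models

def query(text):
    # POST the input text and return the parsed JSON response
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    return response.json()

print(query("Hello, world!"))  # e.g. [{"translation_text": "¡Hola, mundo!"}]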
Let's try the same demo as above but using the Inference API instead of loading the model yourself. Given a Hugging Face model supported in the Inference API, Gradio can automatically infer the expected input and output and make the underlying server calls, so you don't have to worry about defining the prediction function. Here is what the code would look like!
import gradio as gr
demo = gr.load("Helsinki-NLP/opus-mt-en-es", src="models")
demo.launch()
Notice that we just specify the model name and state that the src should be models (Hugging Face's Model Hub). There is no need to install any dependencies (except gradio) since you are not loading the model on your computer.
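If you run into the rate limit, you can authenticate the underlying requests with a Hugging Face token. A hedged sketch (the hf_token parameter is available in recent Gradio versions; older releases may differ, and "hf_..." is a placeholder):

import gradio as gr

# Pass a (placeholder) token so the Inference API calls are authenticated
demo = gr.load("Helsinki-NLP/opus-mt-en-es", src="models", hf_token="hf_...")
demo.launch()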
You might notice that the first inference takes about 20 seconds. This happens because the Inference API is loading the model on the server. You get some benefits afterward:
- The inference will be much faster.
- The server caches your requests.
- You get built-in automatic scaling.
Hugging Face Spaces allows anyone to host their Gradio demos for free, and uploading a Gradio demo takes a couple of minutes. You can head to hf.co/new-space, select the Gradio SDK, create an app.py file, and voila! You have a demo you can share with anyone else. To learn more, read this guide on how to host on Hugging Face Spaces using the website.
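For reference, the app.py can be as small as a few lines. A minimal sketch (the greet function here is purely illustrative):

import gradio as gr

def greet(name):
    # Trivial example function for the Space
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()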
Alternatively, you can create a Space programmatically, making use of the huggingface_hub client library. Here's an example:
from huggingface_hub import (
    create_repo,
    get_full_repo_name,
    upload_file,
)

# Placeholders: use your own Space name and a Hugging Face token with write access
target_space_name = "my-space"  # name of the Space to create
hf_token = "hf_..."  # your Write token from hf.co/settings/tokens

# Create a Space repo with the Gradio SDK under your account
create_repo(name=target_space_name, token=hf_token, repo_type="space", space_sdk="gradio")

# Resolve the full "username/space-name" repo id
repo_name = get_full_repo_name(model_id=target_space_name, token=hf_token)

# Upload a local file to serve as the Space's app.py
file_url = upload_file(
    path_or_fileobj="file.txt",
    path_in_repo="app.py",
    repo_id=repo_name,
    repo_type="space",
    token=hf_token,
)
Here, create_repo creates a Gradio repo with the target name under a specific account, using that account's Write token. get_full_repo_name then returns the full name (including the account) of the newly created repo. Finally, upload_file uploads a file to the repo under the name app.py.
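Once the upload finishes, the Space is served at a predictable URL; assuming the repo_name from above, you can print it like so:

# The Space will be available here once it finishes building
print(f"https://huggingface.co/spaces/{repo_name}")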
Throughout this guide, you've seen many embedded Gradio demos. You can also do this on your own website! The first step is to create a Hugging Face Space with the demo you want to showcase. Then, follow the steps here to embed the Space on your website.
You can also use and remix existing Gradio demos on Hugging Face Spaces. For example, you could take two existing Gradio demos, place them in separate tabs, and create a new demo. You can run this new demo locally or upload it to Spaces, allowing endless possibilities to remix and create new demos!
Here's an example that does exactly that:
import gradio as gr

with gr.Blocks() as demo:
    with gr.Tab("Translate to Spanish"):
        gr.load("gradio/helsinki_translation_en_es", src="spaces")
    with gr.Tab("Translate to French"):
        gr.load("abidlabs/en2fr", src="spaces")

demo.launch()
Notice that we use gr.load(), the same method we used to load models using the Inference API. However, here we specify that the src is spaces (Hugging Face Spaces).
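As a side note, a demo loaded this way can also be called like a regular Python function. A minimal sketch (behavior may vary by demo and Gradio version):

import gradio as gr

# Load the Space as a callable Gradio app
en2fr = gr.load("abidlabs/en2fr", src="spaces")

# Call it like an ordinary function to get a prediction
print(en2fr("Hello, how are you?"))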
That's it! Let's recap the various ways Gradio and Hugging Face work together:
- You can convert a transformers pipeline into a Gradio demo using from_pipeline().
- You can build a demo around the Inference API, without having to load the model yourself, using gr.load().
- You can host your Gradio demo on Hugging Face Spaces, either using the GUI or entirely in Python.
- You can embed Gradio demos that are hosted on Hugging Face Spaces onto your own website.
- You can load demos from Hugging Face Spaces to remix and create new Gradio demos using gr.load().
🤗