Advanced Interface Features

There's more to cover on the Interface class. This guide covers all the advanced features: using Interpretation, custom styling, loading from the Hugging Face Hub, and using Parallel and Series.

Interpreting Your Predictions

Most models are black boxes such that the internal logic of the function is hidden from the end user. To encourage transparency, we've made it very easy to add interpretation to your model by simply setting the interpretation keyword in the Interface class to default. This allows your users to understand what parts of the input are responsible for the output. Take a look at the simple interface below, which shows an image classifier that also includes interpretation:

import requests
import tensorflow as tf

import gradio as gr

inception_net = tf.keras.applications.MobileNetV2()  # load the model

# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")


def classify_image(inp):
    inp = inp.reshape((-1, 224, 224, 3))
    inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
    prediction = inception_net.predict(inp).flatten()
    return {labels[i]: float(prediction[i]) for i in range(1000)}


image = gr.Image(shape=(224, 224))
label = gr.Label(num_top_classes=3)

demo = gr.Interface(
    fn=classify_image, inputs=image, outputs=label, interpretation="default"
)

demo.launch()

In addition to default, Gradio also includes Shapley-based interpretation, which provides more accurate interpretations, albeit usually with a slower runtime. To use this, simply set the interpretation parameter to "shap" (note: also make sure the python package shap is installed). Optionally, you can modify the num_shap parameter, which controls the tradeoff between accuracy and runtime (increasing this value generally increases accuracy). Here is an example:

gr.Interface(fn=classify_image, inputs=image, outputs=label, interpretation="shap", num_shap=5).launch()

This will work for any function, even if, internally, the model is a complex neural network or some other black box. If you use Gradio's default or shap interpretation, the output component must be a Label. All common input components are supported. Here is an example with text input:

import gradio as gr

male_words, female_words = ["he", "his", "him"], ["she", "hers", "her"]


def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len(
        [word for word in sentence.split() if word.lower() in female_words]
    )
    total = max(male_count + female_count, 1)
    return {"male": male_count / total, "female": female_count / total}


demo = gr.Interface(
    fn=gender_of_sentence,
    inputs=gr.Textbox(value="She went to his house to get her keys."),
    outputs="label",
    interpretation="default",
)

demo.launch()

So what is happening under the hood? With these interpretation methods, Gradio runs the prediction multiple times with modified versions of the input. Based on the results, the interface automatically highlights in red the parts of the text (or image, etc.) that contributed to increasing the likelihood of the class. The intensity of the color corresponds to the importance of that part of the input. The parts that decrease the class confidence are highlighted in blue.
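
To make this concrete, here is a minimal sketch of the leave-one-out idea behind the built-in interpretation, reusing the gender_of_sentence function from the demo above. This is only an illustration of the general approach, not Gradio's actual implementation.

# Illustrative only: approximate each word's contribution by removing it,
# re-running the prediction, and measuring how the confidence changes.
def leave_one_out_scores(sentence):
    base = gender_of_sentence(sentence)
    predicted = max(base, key=base.get)  # class predicted on the full input
    base_score = base[predicted]

    scores = []
    words = sentence.split()
    for i, word in enumerate(words):
        modified = " ".join(words[:i] + words[i + 1:])
        new_score = gender_of_sentence(modified)[predicted]
        # Positive score: removing the word lowers the confidence,
        # so the word supported the predicted class
        scores.append((word, base_score - new_score))
    return scores


print(leave_one_out_scores("She went to his house to get her keys."))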

You can also write your own interpretation function. The demo below adds custom interpretation to the previous demo. This function will take the same inputs as the main wrapped function. The output of this interpretation function will be used to highlight the input of each input component; therefore, the function must return a list where the number of elements corresponds to the number of input components. To see the interpretation format for each input component, check the Docs.

import re

import gradio as gr

male_words, female_words = ["he", "his", "him"], ["she", "hers", "her"]


def gender_of_sentence(sentence):
    male_count = len([word for word in sentence.split() if word.lower() in male_words])
    female_count = len(
        [word for word in sentence.split() if word.lower() in female_words]
    )
    total = max(male_count + female_count, 1)
    return {"male": male_count / total, "female": female_count / total}


# Number of arguments to interpretation function must
# match number of inputs to prediction function
def interpret_gender(sentence):
    result = gender_of_sentence(sentence)
    is_male = result["male"] > result["female"]
    interpretation = []
    for word in re.split("( )", sentence):
        score = 0
        token = word.lower()
        if (is_male and token in male_words) or (not is_male and token in female_words):
            score = 1
        elif (is_male and token in female_words) or (
            not is_male and token in male_words
        ):
            score = -1
        interpretation.append((word, score))
    # Output must be a list of lists containing the same number of elements as inputs
    # Each element corresponds to the interpretation scores for the given input
    return [interpretation]


demo = gr.Interface(
    fn=gender_of_sentence,
    inputs=gr.Textbox(value="She went to his house to get her keys."),
    outputs="label",
    interpretation=interpret_gender,
)

demo.launch()

Learn more about Interpretation in the docs.

Custom Styling

If you'd like more fine-grained control over any aspect of your demo, you can also write your own CSS or pass in a filepath to a CSS file, using the css parameter of the Interface class.

gr.Interface(..., css="body {background-color: red}")

If you'd like to reference external files in your CSS, preface the file path (which can be a relative or absolute path) with "file=", for example:

gr.Interface(..., css="body {background-image: url('file=clouds.jpg')}")

Warning: Custom CSS is not guaranteed to work across Gradio versions as the Gradio HTML DOM may change. We recommend using custom CSS sparingly and instead using Themes whenever possible.
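
For example, recent Gradio versions (3.23 and later) let you pass a prebuilt theme via the theme parameter instead of hand-written CSS. A minimal sketch, reusing the classify_image demo from earlier and assuming gr.themes is available in your installed version:

# Styling via a prebuilt theme rather than custom CSS
# (assumes Gradio 3.23+ where gr.themes is available).
demo = gr.Interface(
    fn=classify_image,
    inputs=image,
    outputs=label,
    theme=gr.themes.Soft(),
)

demo.launch()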

Loading Hugging Face Models and Spaces

Gradio integrates nicely with the Hugging Face Hub, allowing you to load models and Spaces with just one line of code. To use this, simply use the load() method in the Interface class. So:

  • To load any model from the Hugging Face Hub and create an interface around it, you pass "model/" or "huggingface/" followed by the model name, like these examples:

gr.Interface.load("huggingface/gpt2").launch();
gr.Interface.load("huggingface/EleutherAI/gpt-j-6B", 
    inputs=gr.Textbox(lines=5, label="Input Text")  # customizes the input component
).launch()

  • To load any Space from the Hugging Face Hub and recreate it locally (so that you can customize the inputs and outputs, for example), you pass "spaces/" followed by the model name:

gr.Interface.load("spaces/eugenesiow/remove-bg", inputs="webcam", title="Remove your webcam background!").launch()

One of the great things about loading Hugging Face models or Spaces using Gradio is that you can then immediately use the resulting Interface object just like a function in your Python code (this works for every type of model/Space: text, images, audio, video, and even multimodal models):

io = gr.Interface.load("models/EleutherAI/gpt-neo-2.7B")
io("It was the best of times")  # outputs model completion

Putting Interfaces in Parallel and Series

Gradio also lets you mix interfaces very easily using the gradio.Parallel and gradio.Series classes. Parallel lets you put two similar models (if they have the same input type) in parallel to compare model predictions:

generator1 = gr.Interface.load("huggingface/gpt2")
generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")

gr.Parallel(generator1, generator2, generator3).launch()

Series lets you put models and Spaces in series, piping the output of one model into the input of the next model.

generator = gr.Interface.load("huggingface/gpt2")
translator = gr.Interface.load("huggingface/t5-small")

gr.Series(generator, translator).launch()  # this demo generates text, then translates it to German, and outputs the final result.

And of course, you can also mix Parallel and Series together whenever that makes sense!
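
Since Parallel and Series both behave like Interface objects themselves, you can in principle nest one inside the other. Here is a hedged sketch, reusing the generators and translator loaded above and assuming their input and output types line up:

# Illustrative sketch: compare two generate-then-translate pipelines side by side.
# Whether a given combination composes cleanly depends on the models' input and
# output types matching, so treat this as a starting point rather than a recipe.
chain1 = gr.Series(generator1, translator)  # gpt2, then translate
chain2 = gr.Series(generator2, translator)  # gpt-neo, then translate

gr.Parallel(chain1, chain2).launch()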

Learn more about Parallel and Series in the docs.