
What causes javac to issue the “uses unchecked or unsafe operations” warning



If you are interested in what causes javac to issue the “uses unchecked or unsafe operations” warning, this article is one you should not miss. We will go through the details of that warning, along with practical notes on @SuppressWarnings() Java warnings, angular – Ionic 3 – display base64 image (sanitizing unsafe URL value: SafeValue must use [property]=binding), AppScan vulnerability scan: the unsafe “OPTIONS” HTTP method is enabled, and DeepLearning - Forward & Backward Propagation.

Contents:

1. What causes javac to issue the “uses unchecked or unsafe operations” warning
2. @SuppressWarnings() Java warnings
3. angular – Ionic 3 – display base64 image; sanitizing unsafe URL value: SafeValue must use [property]=binding
4. AppScan vulnerability scan: the unsafe “OPTIONS” HTTP method is enabled
5. DeepLearning - Forward & Backward Propagation

What causes javac to issue the “uses unchecked or unsafe operations” warning

For example:

javac Foo.java
Note: Foo.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

Answer 1


This happens in Java 5 and later if you use a collection without a type specifier (e.g., ArrayList() instead of ArrayList&lt;String&gt;()). It means the compiler cannot use generics to check, in a type-safe way, that you are using the collection correctly.

To get rid of the warning, just be specific about the type of objects you want to store in the collection. So, instead of

List myList = new ArrayList();

use

List<String> myList = new ArrayList<String>();

In Java 7 and later, you can use type inference (the diamond operator) to shorten the generic instantiation:

List<String> myList = new ArrayList<>();
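Putting the answer together, here is a minimal sketch that reproduces the warning and its fix (the class body below is illustrative, not taken from the original question); compiling it with javac -Xlint:unchecked points at the offending line:

import java.util.ArrayList;
import java.util.List;

public class Foo {
    public static void main(String[] args) {
        List rawList = new ArrayList();              // raw type: javac cannot verify type safety
        rawList.add("hello");                        // unchecked call, triggers the warning

        List<String> typedList = new ArrayList<>();  // parameterized type: fully checked
        typedList.add("hello");                      // no warning
    }
}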

@SuppressWarnings() Java warnings

1. @SuppressWarnings("unchecked") (suppress a single type of warning)
2. @SuppressWarnings({"unchecked", "rawtypes"}) (suppress multiple types of warnings)
3. @SuppressWarnings("all") (suppress all types of warnings)

> From its source code, the targets of @SuppressWarnings are classes, fields, methods, method parameters, constructors, and local variables. It is recommended to place the annotation as close as possible to where the warning occurs; a sketch follows the keyword table below. The warning keywords are listed below:
| Keyword | Purpose |
| --- | --- |
| all | suppress all warnings |
| boxing | suppress warnings relative to boxing/unboxing operations |
| cast | suppress warnings relative to cast operations |
| dep-ann | suppress warnings relative to deprecated annotations |
| deprecation | suppress warnings relative to deprecation |
| fallthrough | suppress warnings relative to missing breaks in switch statements |
| finally | suppress warnings relative to finally blocks that don't return |
| hiding | suppress warnings relative to locals that hide a variable |
| incomplete-switch | suppress warnings relative to missing entries in a switch statement (enum case) |
| nls | suppress warnings relative to non-nls string literals |
| null | suppress warnings relative to null analysis |
| rawtypes | suppress warnings relative to unspecific types when using generics on class params |
| restriction | suppress warnings relative to usage of discouraged or forbidden references |
| serial | suppress warnings relative to a missing serialVersionUID field on a serializable class |
| static-access | suppress warnings relative to incorrect static access |
| synthetic-access | suppress warnings relative to unoptimized access from inner classes |
| unchecked | suppress warnings relative to unchecked operations |
| unqualified-field-access | suppress warnings relative to unqualified field access |
| unused | suppress warnings relative to unused code |
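As a minimal sketch of the single- and multi-keyword forms (the class and method names below are my own invention for illustration):

import java.util.ArrayList;
import java.util.List;

public class LegacyAdapter {

    // Scoped to one method, as recommended above: the annotation sits
    // right where the warnings occur instead of on the whole class.
    @SuppressWarnings({"unchecked", "rawtypes"})
    static List<String> fromLegacy(Object legacy) {
        List raw = (List) legacy;       // would otherwise trigger a rawtypes warning
        return (List<String>) raw;      // would otherwise trigger an unchecked warning
    }
}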

angular – Ionic 3 – display base64 image; sanitizing unsafe URL value: SafeValue must use [property]=binding

I want to display a base64 image as the profile picture.
The image is stored as binary data in the database, and I use btoa() to convert this binary data to base64. Now I want to bind this base64 image to the img src.
I have tried many ways, but it doesn't work. Please help.
Here is my code.

profile.ts code:

profilePicture(binImage)
{
    if(binImage != null)
    {
        var imageData = btoa(binImage);
        //console.log("Base64 Image: ",imageData);
        this.displayImage = imageData;
    }
}

profile.html code:

<div *ngIf="displayImage">
    <img src="data:Image/*;base64,{{displayImage}}">
</div>

See the screenshot (omitted here): the picture is not displayed.

It shows the error “sanitizing unsafe URL value: SafeValue must use [property]=binding”.

Solution

Inject the sanitizer and sanitize the URL before using it in the template:

import { DomSanitizer } from '@angular/platform-browser';

...
constructor( private sanitizer: DomSanitizer,.... ) {}
...

profilePicture(binImage)
{
    if(binImage != null)
    {
        var imageData = btoa(binImage);
        //console.log("Base64 Image: ",imageData);
        this.displayImage = this.sanitizer.bypassSecurityTrustUrl("data:Image/*;base64,"+imageData);
    }
}

In your template:

<div *ngIf="displayImage">
    <img [src]="displayImage">
</div>

Note that the SafeUrl returned by bypassSecurityTrustUrl must be bound with the [src] property binding rather than interpolated with {{ }}; interpolation stringifies the value, which is exactly what the “SafeValue must use [property]=binding” error complains about.

AppScan vulnerability scan: the unsafe “OPTIONS” HTTP method is enabled

Solution:

Add the following to Tomcat's web.xml:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>fortune</web-resource-name>
        <url-pattern>/*</url-pattern>
        <http-method>HEAD</http-method>
        <http-method>OPTIONS</http-method>
        <http-method>TRACE</http-method>
    </web-resource-collection>
    <!-- An empty auth-constraint grants access to no role, so the
         listed methods are denied for every request under /* -->
    <auth-constraint></auth-constraint>
</security-constraint>
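To check that the constraint took effect, one can send an OPTIONS request and look at the status code. A minimal sketch, assuming a local Tomcat on http://localhost:8080/ (the class name OptionsCheck and the URL are mine, not from the original note):

import java.net.HttpURLConnection;
import java.net.URL;

public class OptionsCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/");  // assumed server address
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("OPTIONS");
        // With the constraint in place, the listed methods are denied,
        // so this should print 403 rather than 200.
        System.out.println("Status: " + conn.getResponseCode());
    }
}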

 

DeepLearning - Forward & Backward Propagation

In the previous post I went through a basic 1-layer Neural Network with a sigmoid activation function, including:

  • How to get the sigmoid function from a binary classification problem?

  • NN is still an optimization problem, so what's the target to optimize? - the cost function

  • How does the model learn? - gradient descent

  • What is the workflow of a NN? - forward/backward propagation

Now let's go deeper into the 2-layer Neural Network, from which you can extend to as many hidden layers as you want. Also let's try to vectorize everything.

## 1. The architecture of a 2-layer shallow NN

Below is the architecture of a 2-layer NN: an input layer, one hidden layer, and one output layer. (The input layer is not counted in the layer count.)

(Figure: architecture of the 2-layer NN)

### (1) Forward propagation

In each neuron, two activities take place after it takes in the input from the previous layer:

  1. a linear transformation of the input
  2. a non-linear activation function applied afterwards

Then the output is passed to the next layer as input.

Doing the above computation layer by layer, from the input layer to the output layer, is forward propagation. It tries to map each input $x \in R^n$ to $y$.

For each training sample, forward propagation is defined as follows:

$x \in R^{n \times 1}$ denotes the input data. In the picture $n = 4$.

$(w^{[1]} \in R^{k \times n}, b^{[1]} \in R^{k \times 1})$ are the parameters of the first hidden layer. Here $k = 3$.

$(w^{[2]} \in R^{1 \times k}, b^{[2]} \in R^{1 \times 1})$ are the parameters of the output layer. The output is a binary variable with 1 dimension.

$(z^{[1]} \in R^{k \times 1}, z^{[2]} \in R^{1 \times 1})$ are the intermediate outputs after the linear transformation in the hidden and output layers.

$(a^{[1]} \in R^{k \times 1}, a^{[2]} \in R^{1 \times 1})$ are the outputs of each layer. To make it more general we can use $a^{[0]} \in R^n$ to denote $x$.

*Here we use $g(x)$ as the activation function for the hidden layer, and the sigmoid $\sigma(x)$ for the output layer. We will discuss the available activation functions $g(x)$ in a following post.* What happens in forward propagation is the following:

$[1]$ $z^{[1]} = w^{[1]} a^{[0]} + b^{[1]}$
$[2]$ $a^{[1]} = g(z^{[1]})$
$[3]$ $z^{[2]} = w^{[2]} a^{[1]} + b^{[2]}$
$[4]$ $a^{[2]} = \sigma(z^{[2]})$
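As a quick dimension check of the above (using $n = 4$, $k = 3$ from the picture; this worked example is mine, not the original post's):

$$z^{[1]}: (3 \times 4)(4 \times 1) + (3 \times 1) = (3 \times 1), \qquad z^{[2]}: (1 \times 3)(3 \times 1) + (1 \times 1) = (1 \times 1)$$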

### (2) Backward propagation

After forward propagation is done for each training sample $x$, we have a prediction $\hat{y}$. Comparing $\hat{y}$ with $y$, we then use the error between the prediction and the real value to update the parameters via gradient descent.

Backward propagation passes the gradient from the output layer back to the input layer using the chain rule, as below. The derivation is in the previous post.

$$ \frac{\partial L(a,y)}{\partial w} = \frac{\partial L(a,y)}{\partial a} \cdot \frac{\partial a}{\partial z} \cdot \frac{\partial z}{\partial w}$$

$[4]$ $dz^{[2]} = a^{[2]} - y$
$[3]$ $dw^{[2]} = dz^{[2]} a^{[1]T}$, $db^{[2]} = dz^{[2]}$
$[2]$ $dz^{[1]} = da^{[1]} * g^{[1]\prime}(z^{[1]}) = w^{[2]T} dz^{[2]} * g^{[1]\prime}(z^{[1]})$
$[1]$ $dw^{[1]} = dz^{[1]} a^{[0]T}$, $db^{[1]} = dz^{[1]}$
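To see where $dz^{[2]} = a^{[2]} - y$ comes from (a step the post defers to the previous one), combine the binary cross-entropy loss with the sigmoid derivative; this assumes that loss is the one in use, which the sigmoid output layer suggests:

$$L(a, y) = -\big(y \log a + (1-y) \log(1-a)\big), \qquad \frac{\partial L}{\partial a} = \frac{a - y}{a(1-a)}, \qquad \frac{\partial a}{\partial z} = a(1-a)$$

$$dz^{[2]} = \frac{\partial L}{\partial a} \cdot \frac{\partial a}{\partial z} = a^{[2]} - y$$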

## 2. Vectorize and generalize your NN

Let's derive the vectorized representation of the above forward and backward propagation. The point of vectorization is to speed up computation. We will talk about this again in batch gradient descent.

$w^{[1]}, b^{[1]}, w^{[2]}, b^{[2]}$ stay the same. In general $w^{[i]}$ has dimension $(h_i, h_{i-1})$ and $b^{[i]}$ has dimension $(h_i, 1)$.

$Z^{[1]} \in R^{k \times m}$, $Z^{[2]} \in R^{1 \times m}$, $A^{[0]} \in R^{n \times m}$, $A^{[1]} \in R^{k \times m}$, $A^{[2]} \in R^{1 \times m}$, where $A^{[0]}$ is the input matrix and each column is one training sample.

### (1) Forward propagation

Following the above logic, the vectorized representation is below:

$[1]$ $Z^{[1]} = w^{[1]} A^{[0]} + b^{[1]}$
$[2]$ $A^{[1]} = g(Z^{[1]})$
$[3]$ $Z^{[2]} = w^{[2]} A^{[1]} + b^{[2]}$
$[4]$ $A^{[2]} = \sigma(Z^{[2]})$

Have you noticed that the dimensions above are not an exact match? $w^{[1]} A^{[0]}$ has dimension $(k, m)$, while $b^{[1]}$ has dimension $(k, 1)$. However, Python takes care of this for you with broadcasting: it replicates the lower-dimensional array along the missing dimension. Here $b^{[1]}$ is replicated $m$ times to become $(k, m)$, as the sketch below makes explicit.
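Written out explicitly, the broadcast amounts to adding a rank-one term (the ones row vector $\mathbf{1}_{1 \times m}$ is my notation, not the post's):

$$Z^{[1]} = w^{[1]} A^{[0]} + b^{[1]} \mathbf{1}_{1 \times m}, \qquad \big(b^{[1]} \mathbf{1}_{1 \times m}\big)_{ij} = b^{[1]}_i$$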

### (2) Backward propagation

Same as above, the vectorized backward propagation will be:

$[4]$ $dZ^{[2]} = A^{[2]} - Y$
$[3]$ $dw^{[2]} = \frac{1}{m} dZ^{[2]} A^{[1]T}$, $db^{[2]} = \frac{1}{m} \sum dZ^{[2]}$
$[2]$ $dZ^{[1]} = dA^{[1]} * g^{[1]\prime}(Z^{[1]}) = w^{[2]T} dZ^{[2]} * g^{[1]\prime}(Z^{[1]})$
$[1]$ $dw^{[1]} = \frac{1}{m} dZ^{[1]} A^{[0]T}$, $db^{[1]} = \frac{1}{m} \sum dZ^{[1]}$
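The $\frac{1}{m}$ factors, absent from the per-sample version, follow if, as is standard, the cost is the average of the per-sample losses:

$$J = \frac{1}{m} \sum_{i=1}^{m} L\big(a^{[2](i)}, y^{(i)}\big) \quad \Rightarrow \quad \frac{\partial J}{\partial w^{[l]}} = \frac{1}{m} \sum_{i=1}^{m} \frac{\partial L^{(i)}}{\partial w^{[l]}}$$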

In the next post, I will talk about some other details of NNs, like hyperparameters and activation functions.

To be continued.



This concludes today's introduction to what causes javac to issue the “uses unchecked or unsafe operations” warning. Thank you for reading. More information on @SuppressWarnings() Java warnings, angular – Ionic 3 – display base64 image (sanitizing unsafe URL value: SafeValue must use [property]=binding), AppScan vulnerability scan: the unsafe “OPTIONS” HTTP method is enabled, DeepLearning - Forward & Backward Propagation, and related topics can be found by searching this site.
