investigating

Spark

1.

20/07/30 17:36:43 INFO yarn.Client: Application report for application_1525335504172_33279 (state: RUNNING)
20/07/30 17:36:44 INFO yarn.Client: Application report for application_1525335504172_33279 (state: FAILED)
20/07/30 17:36:44 INFO yarn.Client:
         client token: N/A
         diagnostics: Application application_1525335504172_33279 failed 2 times due to AM Container for appattempt_1525335504172_33279_000002 exited with  exitCode: -104
For more detailed output, check application tracking page:http://**101:8088/proxy/application_1525335504172_33279/Then, click on links to logs of each attempt.
Diagnostics: Container [pid=17340,containerID=container_1525335504172_33279_02_000001] is running beyond physical memory limits. Current usage: 1.5 GB of 1.5 GB physical memory used; 3.7 GB of 3.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1525335504172_33279_02_000001 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 17344 17340 17340 17340 (java) 745462 48843 3837046784 395078 /usr/java/jdk1.8.0_121/bin/java -server -Xmx1024m -Djava.io.tmpdir=/cdhdata/yarn/nm/usercache/hive/appcache/application_1525335504172_33279/container_1525335504172_33279_02_000001/tmp -Dspark.yarn.app.container.log.dir=/cdhdata/yarn/container-logs/application_1525335504172_33279/container_1525335504172_33279_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class aai.Kafka2HiverSpeakerDataLoader --jar file:/cdhdata/aipdc/aip-rt-speaker-jar-with-dependencies.jar --arg --files --arg /cdhdata/aipdc/speaker.conf --arg --conf --arg spark.driver.extraJavaOptions=-Dconfig.file=/cdhdata/aipdc/speaker.conf --arg --conf --arg spark.executor.extraJavaOptions=-Dconfig.file=/cdhdata/aipdc/speaker.conf --arg --conf --arg hive.exec.max.dynamic.partitions=10000 --properties-file /cdhdata/yarn/nm/usercache/hive/appcache/application_1525335504172_33279/container_1525335504172_33279_02_000001/__spark_conf__/__spark_conf__.properties
        |- 17340 17338 17340 17340 (bash) 0 0 116080640 259 /bin/bash -c LD_LIBRARY_PATH="/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/../../../CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/lib/native::/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/lib/native" /usr/java/jdk1.8.0_121/bin/java -server -Xmx1024m -Djava.io.tmpdir=/cdhdata/yarn/nm/usercache/hive/appcache/application_1525335504172_33279/container_1525335504172_33279_02_000001/tmp -Dspark.yarn.app.container.log.dir=/cdhdata/yarn/container-logs/application_1525335504172_33279/container_1525335504172_33279_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'aai.Kafka2HiverSpeakerDataLoader' --jar file:/cdhdata/aipdc/aip-rt-speaker-jar-with-dependencies.jar --arg '--files' --arg '/cdhdata/aipdc/speaker.conf' --arg '--conf' --arg 'spark.driver.extraJavaOptions=-Dconfig.file=/cdhdata/aipdc/speaker.conf' --arg '--conf' --arg 'spark.executor.extraJavaOptions=-Dconfig.file=/cdhdata/aipdc/speaker.conf' --arg '--conf' --arg 'hive.exec.max.dynamic.partitions=10000' --properties-file /cdhdata/yarn/nm/usercache/hive/appcache/application_1525335504172_33279/container_1525335504172_33279_02_000001/__spark_conf__/__spark_conf__.properties 1> /cdhdata/yarn/container-logs/application_1525335504172_33279/container_1525335504172_33279_02_000001/stdout 2> /cdhdata/yarn/container-logs/application_1525335504172_33279/container_1525335504172_33279_02_000001/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.users.hive
         start time: 1596078877600
         final status: FAILED
         tracking URL: http://**101:8088/cluster/app/application_1525335504172_33279
         user: hive
20/07/30 17:36:44 INFO yarn.Client: Deleted staging directory hdfs://**101:8020/user/hive/.sparkStaging/application_1525335504172_33279
20/07/30 17:36:44 ERROR yarn.Client: Application diagnostics message: Application application_1525335504172_33279 failed 2 times due to AM Container for appattempt_1525335504172_33279_000002 exited with  exitCode: -104
For more detailed output, check application tracking page:http://**101:8088/proxy/application_1525335504172_33279/Then, click on links to logs of each attempt.
Diagnostics: Container [pid=17340,containerID=container_1525335504172_33279_02_000001] is running beyond physical memory limits. Current usage: 1.5 GB of 1.5 GB physical memory used; 3.7 GB of 3.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1525335504172_33279_02_000001 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 17344 17340 17340 17340 (java) 745462 48843 3837046784 395078 /usr/java/jdk1.8.0_121/bin/java -server -Xmx1024m -Djava.io.tmpdir=/cdhdata/yarn/nm/usercache/hive/appcache/application_1525335504172_33279/container_1525335504172_33279_02_000001/tmp -Dspark.yarn.app.container.log.dir=/cdhdata/yarn/container-logs/application_1525335504172_33279/container_1525335504172_33279_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class aai.Kafka2HiverSpeakerDataLoader --jar file:/cdhdata/aipdc/aip-rt-speaker-jar-with-dependencies.jar --arg --files --arg /cdhdata/aipdc/speaker.conf --arg --conf --arg spark.driver.extraJavaOptions=-Dconfig.file=/cdhdata/aipdc/speaker.conf --arg --conf --arg spark.executor.extraJavaOptions=-Dconfig.file=/cdhdata/aipdc/speaker.conf --arg --conf --arg hive.exec.max.dynamic.partitions=10000 --properties-file /cdhdata/yarn/nm/usercache/hive/appcache/application_1525335504172_33279/container_1525335504172_33279_02_000001/__spark_conf__/__spark_conf__.properties
        |- 17340 17338 17340 17340 (bash) 0 0 116080640 259 /bin/bash -c LD_LIBRARY_PATH="/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/../../../CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/lib/native::/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop/lib/native" /usr/java/jdk1.8.0_121/bin/java -server -Xmx1024m -Djava.io.tmpdir=/cdhdata/yarn/nm/usercache/hive/appcache/application_1525335504172_33279/container_1525335504172_33279_02_000001/tmp -Dspark.yarn.app.container.log.dir=/cdhdata/yarn/container-logs/application_1525335504172_33279/container_1525335504172_33279_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'aai.Kafka2HiverSpeakerDataLoader' --jar file:/cdhdata/aipdc/aip-rt-speaker-jar-with-dependencies.jar --arg '--files' --arg '/cdhdata/aipdc/speaker.conf' --arg '--conf' --arg 'spark.driver.extraJavaOptions=-Dconfig.file=/cdhdata/aipdc/speaker.conf' --arg '--conf' --arg 'spark.executor.extraJavaOptions=-Dconfig.file=/cdhdata/aipdc/speaker.conf' --arg '--conf' --arg 'hive.exec.max.dynamic.partitions=10000' --properties-file /cdhdata/yarn/nm/usercache/hive/appcache/application_1525335504172_33279/container_1525335504172_33279_02_000001/__spark_conf__/__spark_conf__.properties 1> /cdhdata/yarn/container-logs/application_1525335504172_33279/container_1525335504172_33279_02_000001/stdout 2> /cdhdata/yarn/container-logs/application_1525335504172_33279/container_1525335504172_33279_02_000001/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
Exception in thread "main" org.apache.spark.SparkException: Application application_1525335504172_33279 finished with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1171)
        at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1608)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/07/30 17:36:44 INFO util.ShutdownHookManager: Shutdown hook called
20/07/30 17:36:44 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-79eb7a14-689d-46ba-a8b1-4b55693f84e4
20/07/30 17:36:44 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-1c384a08-6bc0-4c18-825e-07f88d926252

Cause:

Diagnostics: Container [pid=17340,containerID=container_1525335504172_33279_02_000001] is running beyond physical memory limits. Current usage: 1.5 GB of 1.5 GB physical memory used; 3.7 GB of 3.1 GB virtual memory used. Killing container.

Error analysis

- 1.5 GB: the physical memory actually used by the task
- 1.5 GB: the physical memory limit allocated to the container (for a MapReduce map task this would be the mapreduce.map.memory.mb setting)
- 3.7 GB: the virtual memory used by the process tree
- 3.1 GB: the virtual memory limit, i.e. the physical memory limit multiplied by yarn.nodemanager.vmem-pmem-ratio

yarn.nodemanager.vmem-pmem-ratio is the allowed ratio of virtual to physical memory per container; it is set in yarn-site.xml and defaults to 2.1.

Clearly, the container used 3.7 GB of virtual memory but was only allowed 3.1 GB (1.5 GB × 2.1), so YARN killed the container.

The figures above correspond to the map-side setting; the same error can also occur on the reduce side, in which case the limit is mapreduce.reduce.memory.mb × yarn.nodemanager.vmem-pmem-ratio.

Note: physical memory is the actual hardware (RAM modules). Virtual memory is logical memory backed by disk space; the disk space used for it is called swap space (a strategy for making up for insufficient physical memory). When physical memory runs low, Linux uses the swap partition: the kernel writes memory blocks that are not currently needed out to swap, freeing that physical memory for other uses, and reads the contents back into physical memory when they are needed again.

Solutions

  1. Disable the virtual memory check (not recommended): set yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml (or in the job configuration):
   <property>
     <name>yarn.nodemanager.vmem-check-enabled</name>
     <value>false</value>
     <description>Whether virtual memory limits will be enforced for containers.</description>
   </property>
     Besides exceeding the virtual memory limit, a container can also exceed its physical memory limit; that check can likewise be disabled via yarn.nodemanager.pmem-check-enabled: false. In my view this is not a good approach: if the program has a memory leak, removing these checks could bring down the whole cluster.

  2. Increase mapreduce.map.memory.mb or mapreduce.reduce.memory.mb (recommended). This should be the first option to consider: it covers not only the virtual memory limit but also what is probably the more common case, simply running out of physical memory (see the Spark-side sketch after this list).

  3. Moderately increase yarn.nodemanager.vmem-pmem-ratio, i.e. grant correspondingly more virtual memory per unit of physical memory, but do not push this parameter to an extreme.

  4. If the task's memory usage is wildly out of proportion, the first thing to check is whether the program has a memory leak, data skew, or similar problems; fix those in the program itself first.
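Since the failing job in this log is a Spark application (the killed container is the ApplicationMaster), the analogous Spark-side knobs are spark.driver.memory plus its overhead for the AM/driver container, and spark.executor.memory plus its overhead for executors. A minimal sketch with purely illustrative values; note that in yarn-cluster mode the driver/AM memory must be set at submit time (e.g. spark-submit --driver-memory or spark-defaults.conf), because SparkConf is only read after the driver JVM has already started:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Sketch of option 2 translated to Spark settings; all values are illustrative.
public class MemorySettingsSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("speaker-loader")          // placeholder name
                .set("spark.executor.memory", "3g")    // executor container heap
                // spark.executor.memoryOverhead is the Spark 2.3+ name; older versions
                // use spark.yarn.executor.memoryOverhead (value in MB).
                .set("spark.executor.memoryOverhead", "1024");

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job logic ...
        sc.stop();
    }
}
```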

2. Duplicate output from Spark Streaming, and how to address it:

Cause:

Why does this happen? Because Spark Streaming runs on Spark Core, which by design performs task retries, speculative execution of slow tasks, stage retries, and job retries, and any of these can cause (part of) the Spark Streaming output to be written more than once.

Solutions:

  1. Set spark.task.maxFailures to 1 so that tasks are never retried, and set spark.speculation to off so there is no speculative execution of slow tasks. Speculation is also quite expensive, so turning it off can noticeably improve Spark Streaming performance.

  2. For Spark Streaming on Kafka, after a job fails you can set the Kafka parameter auto.offset.reset to largest ("latest" in the Kafka 0.10+ consumer).

Finally, to emphasize the point once more: you can use transform and foreachRDD to apply your own business-logic controls and thereby guarantee that data is neither consumed nor output more than once. These two operators are something like back doors into Spark Streaming and allow essentially any control you can imagine, as sketched below.
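A minimal sketch of options 1 and 2 with the Java DStream API used elsewhere on this page; the broker address and app name are placeholders, while the topic and group id are taken from the logs further down:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

// Illustrative only: shows where the settings from options 1 and 2 above would go.
public class DedupConfigSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf()
                .setAppName("speaker-loader")         // placeholder name
                .set("spark.task.maxFailures", "1")   // option 1: no task retries
                .set("spark.speculation", "false");   // option 1: no speculative execution

        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(3));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "broker:9092");  // placeholder
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "iot01");
        kafkaParams.put("auto.offset.reset", "latest");       // option 2 ("largest" in the old consumer)
        kafkaParams.put("enable.auto.commit", false);

        JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(
                        Collections.singletonList("datarev_0x1c_state_prod_iot"), kafkaParams));

        // Business-level control via foreachRDD: make the output idempotent here,
        // e.g. upsert by key or write offsets transactionally with the data.
        stream.foreachRDD(rdd -> {
            // ... output logic ...
        });

        jssc.start();
        jssc.awaitTermination();
    }
}
```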

3. logging

checkpoint

20/08/03 10:30:27 INFO streaming.CheckpointWriter: Deleting hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421797000
20/08/03 10:30:27 INFO streaming.CheckpointWriter: Checkpoint for time 1596421827000 ms saved to file 'hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421827000', took 8189 bytes and 20 ms
20/08/03 10:30:30 INFO internals.Fetcher: [Consumer clientId=consumer-1, groupId=iot01] Resetting offset for partition datarev_0x1c_state_prod_iot-1 to offset 84851284.
20/08/03 10:30:30 INFO internals.Fetcher: [Consumer clientId=consumer-1, groupId=iot01] Resetting offset for partition datarev_0x1c_state_prod_iot-0 to offset 84852374.
20/08/03 10:30:30 INFO scheduler.JobScheduler: Added jobs for time 1596421830000 ms
20/08/03 10:30:30 INFO scheduler.JobGenerator: Checkpointing graph for time 1596421830000 ms
20/08/03 10:30:30 INFO streaming.DStreamGraph: Updating checkpoint data for time 1596421830000 ms
20/08/03 10:30:30 INFO streaming.DStreamGraph: Updated checkpoint data for time 1596421830000 ms
20/08/03 10:30:30 INFO streaming.CheckpointWriter: Submitted checkpoint of time 1596421830000 ms to writer queue
20/08/03 10:30:30 INFO streaming.CheckpointWriter: Saving checkpoint for time 1596421830000 ms to file 'hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421830000'
20/08/03 10:30:30 INFO streaming.CheckpointWriter: Deleting hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421800000
20/08/03 10:30:30 INFO streaming.CheckpointWriter: Checkpoint for time 1596421830000 ms saved to file 'hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421830000', took 8217 bytes and 20 ms
20/08/03 10:30:33 INFO internals.Fetcher: [Consumer clientId=consumer-1, groupId=iot01] Resetting offset for partition datarev_0x1c_state_prod_iot-1 to offset 84851377.
20/08/03 10:30:33 INFO internals.Fetcher: [Consumer clientId=consumer-1, groupId=iot01] Resetting offset for partition datarev_0x1c_state_prod_iot-0 to offset 84852467.
20/08/03 10:30:33 INFO scheduler.JobScheduler: Added jobs for time 1596421833000 ms
20/08/03 10:30:33 INFO scheduler.JobGenerator: Checkpointing graph for time 1596421833000 ms
20/08/03 10:30:33 INFO streaming.DStreamGraph: Updating checkpoint data for time 1596421833000 ms
20/08/03 10:30:33 INFO streaming.DStreamGraph: Updated checkpoint data for time 1596421833000 ms
20/08/03 10:30:33 INFO streaming.CheckpointWriter: Submitted checkpoint of time 1596421833000 ms to writer queue
20/08/03 10:30:33 INFO streaming.CheckpointWriter: Saving checkpoint for time 1596421833000 ms to file 'hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421833000'
20/08/03 10:30:33 INFO streaming.CheckpointWriter: Deleting hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421803000
20/08/03 10:30:33 INFO streaming.CheckpointWriter: Checkpoint for time 1596421833000 ms saved to file 'hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421833000', took 8282 bytes and 156 ms
20/08/03 10:30:36 INFO internals.Fetcher: [Consumer clientId=consumer-1, groupId=iot01] Resetting offset for partition datarev_0x1c_state_prod_iot-1 to offset 84851596.
20/08/03 10:30:36 INFO internals.Fetcher: [Consumer clientId=consumer-1, groupId=iot01] Resetting offset for partition datarev_0x1c_state_prod_iot-0 to offset 84852686.
20/08/03 10:30:36 INFO scheduler.JobScheduler: Added jobs for time 1596421836000 ms
20/08/03 10:30:36 INFO scheduler.JobGenerator: Checkpointing graph for time 1596421836000 ms
20/08/03 10:30:36 INFO streaming.DStreamGraph: Updating checkpoint data for time 1596421836000 ms
20/08/03 10:30:36 INFO streaming.DStreamGraph: Updated checkpoint data for time 1596421836000 ms
20/08/03 10:30:36 INFO streaming.CheckpointWriter: Submitted checkpoint of time 1596421836000 ms to writer queue
20/08/03 10:30:36 INFO streaming.CheckpointWriter: Saving checkpoint for time 1596421836000 ms to file 'hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421836000'
20/08/03 10:30:36 INFO streaming.CheckpointWriter: Deleting hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421806000
20/08/03 10:30:36 INFO streaming.CheckpointWriter: Checkpoint for time 1596421836000 ms saved to file 'hdfs://**101:8020/tmp/hive/iot/speaker_online/checkpoint-1596421836000', took 8324 bytes and 19 ms
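The checkpoint files being saved and rotated above are managed by Spark Streaming itself once a checkpoint directory has been configured. A minimal sketch of how such a directory is typically wired up with the Java API (the directory is the one from the log, with the host masked; the DStream graph itself is elided):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

// Illustration of checkpoint wiring; not the application's actual code.
public class CheckpointSketch {
    private static final String CHECKPOINT_DIR = "hdfs://**101:8020/tmp/hive/iot/speaker_online";

    public static void main(String[] args) throws Exception {
        // On restart, getOrCreate rebuilds the StreamingContext (and the DStream graph)
        // from the checkpoint; otherwise the factory below creates a fresh one.
        JavaStreamingContext jssc = JavaStreamingContext.getOrCreate(CHECKPOINT_DIR, () -> {
            SparkConf conf = new SparkConf().setAppName("speaker-loader"); // placeholder name
            JavaStreamingContext created = new JavaStreamingContext(conf, Durations.seconds(3));
            created.checkpoint(CHECKPOINT_DIR);
            // ... define the Kafka DStream and its processing here ...
            return created;
        });

        jssc.start();
        jssc.awaitTermination();
    }
}
```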

insertInto (Dataset)

 User class threw exception: org.apache.spark.sql.AnalysisException: insertInto() can't be used together with partitionBy(). Partition columns have already been defined for the table. It is not necessary to use partitionBy().;
at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:318)
at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:311)
at aai.Kafka2HiverSpeakerDataLoader.lambda$main$bcc85ede$1(Kafka2HiverSpeakerDataLoader.java:167)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745) 
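The fix implied by the message: when the target Hive table is already partitioned, drop the partitionBy() call and let insertInto() resolve partitions positionally (the partition columns must therefore be the last columns of the Dataset, in the table's partition order). A hedged sketch; the table and column names are made up for illustration:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

// Sketch of the insertInto()/partitionBy() fix; table and column names are hypothetical.
public class InsertIntoSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("speaker-loader")      // placeholder name
                .enableHiveSupport()
                .getOrCreate();

        // Dynamic partitioning is usually needed when writing many partitions at once.
        spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict");

        Dataset<Row> df = spark.table("staging.speaker_events");   // placeholder source

        // Wrong: the Hive table already defines its partition columns.
        // df.write().partitionBy("dt").insertInto("iot.speaker_online");

        // Right: insertInto() alone; columns are matched by position, so the
        // partition column(s) ("dt" here) must come last in df.
        df.write().mode(SaveMode.Append).insertInto("iot.speaker_online");
    }
}
```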

4. yarn warn log

Scheduling Delay: the time taken by the streaming scheduler to submit the jobs of a batch.


20/08/03 12:31:44 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
20/08/03 12:31:48 WARN kafka010.KafkaUtils: overriding enable.auto.commit to false for executor
20/08/03 12:31:48 WARN kafka010.KafkaUtils: overriding auto.offset.reset to none for executor
20/08/03 12:31:48 WARN kafka010.KafkaUtils: overriding executor group.id to spark-executor-iot00
20/08/03 12:31:48 WARN kafka010.KafkaUtils: overriding receive.buffer.bytes to 65536 see KAFKA-3135
20/08/03 12:31:48 WARN streaming.StreamingContext: Dynamic Allocation is enabled for this application. Enabling Dynamic allocation for Spark Streaming applications can cause data loss if Write Ahead Log is not enabled for non-replayable sources like Flume. See the programming guide for details on how to enable the Write Ahead Log.
20/08/03 12:40:12 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
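The StreamingContext warning about dynamic allocation can be addressed either by disabling dynamic allocation for the streaming job or by enabling the receiver write-ahead log. A minimal sketch of the two settings (for the direct Kafka stream used here, data is replayed from Kafka offsets, so the WAL mainly matters for receiver-based sources):

```java
import org.apache.spark.SparkConf;

// Sketch only: two alternative ways to address the dynamic-allocation warning above.
public class StreamingAllocationSketch {
    public static SparkConf buildConf() {
        SparkConf conf = new SparkConf().setAppName("speaker-loader");   // placeholder name

        // Option A: turn dynamic allocation off for this streaming job.
        conf.set("spark.dynamicAllocation.enabled", "false");

        // Option B (receiver-based sources only): keep dynamic allocation and
        // enable the write-ahead log so buffered data can be recovered.
        // conf.set("spark.streaming.receiver.writeAheadLog.enable", "true");

        return conf;
    }
}
```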

5. Features and labels

Settings the user can control: T = (mode, temperature, fan speed, swing, humidity, air purification, duration); V = (region, population, indoor temperature, outdoor temperature, indoor humidity, outdoor humidity, PM2.5).

A machine learning system learns how to combine input information to make useful predictions about data it has never seen before. The basic machine learning terms are introduced below.

Label: the label is the thing we are predicting, i.e. the y variable in simple linear regression. The label could be the future price of wheat, the kind of animal shown in a picture, the meaning of an audio clip, or just about anything.

"Shoes a user adores" is not a useful label: adoration is not an observable, quantifiable metric. The best we can do is search for observable proxy metrics for the user's preferences.

Feature: a feature is an input variable, i.e. the x variable in simple linear regression. A simple machine learning project might use a single feature, while a more sophisticated project could use millions of features, specified as x1, x2, ..., xN.

"Shoe beauty" is not a useful feature: good features should be concrete and quantifiable, and beauty is too vague a notion to serve as a practical feature. Beauty is probably a blend of certain concrete features, such as style and color, either of which would make a better feature than beauty.

In the spam detector example, the features could include:

- the words in the email text
- the sender's address
- the time of day the email was sent
- whether the email contains a phrase like "one weird trick"

Example: an example is a particular instance of data, x. (We write x in bold to indicate that it is a vector.) Examples fall into two categories:

labeled examples and unlabeled examples. A labeled example includes both the feature(s) and the label, i.e.:

labeled examples: {features, label}: (x, y). We use labeled examples to train the model. In the spam detector example, the labeled examples are the individual emails that users have explicitly marked as "spam" or "not spam".

For example, the following table shows 5 labeled examples taken from a dataset containing California housing prices:

| housingMedianAge (feature) | totalRooms (feature) | totalBedrooms (feature) | medianHouseValue (label) |
|---|---|---|---|
| 15 | 5612 | 1283 | 66900 |
| 19 | 7650 | 1901 | 80100 |
| 17 | 720 | 174 | 85700 |
| 14 | 1501 | 337 | 73400 |
| 20 | 1454 | 326 | 65500 |

An unlabeled example contains the feature(s) but not the label, i.e.:

unlabeled examples: {features, ?}: (x, ?). Here are 3 unlabeled examples from the same housing dataset, which do not include medianHouseValue:

| housingMedianAge (feature) | totalRooms (feature) | totalBedrooms (feature) |
|---|---|---|
| 42 | 1686 | 361 |
| 34 | 1226 | 180 |
| 33 | 1077 | 271 |

Once the model has been trained on labeled examples, we use it to predict labels for unlabeled examples. In the spam detector example, the unlabeled examples are new emails that users have not yet labeled.

Model: a model defines the relationship between features and the label. For example, a spam detection model might associate certain features strongly with "spam". Two phases of a model's life cycle are worth highlighting:

Training means creating or learning the model: you show the model labeled examples and let it gradually learn the relationship between features and label.

Inference means applying the trained model to unlabeled examples, i.e. using the trained model to make useful predictions (y'). For example, during inference you can predict medianHouseValue for new unlabeled examples.

Regression vs. classification: a regression model predicts continuous values. For example, regression models answer questions such as:

What is the value of a house in California?

What is the probability that a user will click on this ad?

A classification model predicts discrete values. For example, classification models answer questions such as:

Is a given email spam or not spam?

Is this an image of a dog, a cat, or a hamster?

tf.train.exponential_decay( learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None )

 # Learning-rate decay formula:
 decayed_learning_rate = learning_rate *
                         decay_rate ^ (global_step / decay_steps)


 Starting from global_step = 0, the step counter advances 1, 2, 3, ... Note that decay_rate > 1 makes the learning rate grow while decay_rate < 1 makes it shrink, and that (global_step / decay_steps) is used as the exponent of decay_rate, not as a multiplier.
 The learning rate therefore keeps decaying as global_step grows; decay_steps only controls how fast the exponent increases (with staircase=True the exponent is truncated to an integer, so the rate only changes once every decay_steps steps).
 If the global_step=global_steps argument is removed from train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_steps), global_step is no longer incremented automatically.
 decay_steps controls the decay speed: the larger decay_steps is, the more slowly (global_step / decay_steps) grows, and the more slowly the learning rate is updated.
 Example:
 # initial learning rate
 learning_rate = 0.01
 # decay factor
 decay_rate = 0.96
 decay_steps = 100
 # number of iterations
 global_steps = 10000
 This means the learning rate decays by a factor of 0.96 every 100 steps. For example, after 10000 iterations, 10000 / 100 = 100, so the rate has decayed by 0.96^100: the learning rate is 0.01 * 0.96^100 ≈ 1.7e-4, i.e. very close to zero.
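A tiny self-contained re-implementation of the same formula (in Java, to match the other code on this page) for sanity-checking the numbers above:

```java
// Minimal re-implementation of the exponential-decay formula above (illustration only).
public class ExponentialDecay {
    static double decayedLearningRate(double learningRate, long globalStep,
                                      long decaySteps, double decayRate, boolean staircase) {
        double exponent = (double) globalStep / decaySteps;
        if (staircase) {
            exponent = Math.floor(exponent); // decay only once every decaySteps steps
        }
        return learningRate * Math.pow(decayRate, exponent);
    }

    public static void main(String[] args) {
        // Values from the example above: 0.01 * 0.96^(10000/100) ≈ 1.7e-4
        System.out.println(decayedLearningRate(0.01, 10000, 100, 0.96, false));
    }
}
```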

"Compute parameter updates": this is where the machine learning system examines the value of the loss function and generates new values for b and w1. Suppose this mysterious green box (from the course diagram) produces new parameter values; the system then re-evaluates all the features against all the labels, yielding a new value for the loss function, which in turn yields new parameter values. This learning process iterates until the algorithm finds the model parameters with the lowest possible loss. Usually you keep iterating until the overall loss stops changing, or changes extremely slowly; at that point we say the model has converged.

Hyperparameters are the knobs that programmers tweak in machine learning algorithms; a great deal of time goes into tuning the learning rate.

y' = prediction(x); the loss measures its deviation from the actual y.

In gradient descent, a batch is the total number of examples used to compute the gradient in a single iteration. So far we have assumed that the batch is the entire dataset. At Google's scale, datasets often contain billions or even hundreds of billions of examples, and they often contain huge numbers of features, so a batch can be enormous; with a very large batch, a single iteration can take a very long time to compute.

A large dataset of randomly sampled examples probably contains redundant data, and the larger the batch, the more likely redundancy becomes. Some redundancy can be useful for smoothing out noisy gradients, but enormous batches tend not to carry much more predictive value than large ones.

What if we could get the right gradient, on average, with far less computation? By choosing examples at random from the dataset, we can estimate a large average from a much smaller sample (albeit noisily). Stochastic gradient descent (SGD) takes this idea to the extreme: it uses only a single example (a batch size of 1) per iteration. Given enough iterations SGD works, but it is very noisy; the term "stochastic" indicates that the single example comprising each batch is chosen at random.

Mini-batch stochastic gradient descent (mini-batch SGD) is a compromise between full-batch iteration and SGD. A mini-batch typically contains 10 to 1000 randomly chosen examples. Mini-batch SGD reduces the noise of SGD while still being more efficient than full batch, as sketched below.
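A minimal, self-contained mini-batch SGD sketch for a single-feature linear model y ≈ w1·x + b; the data and hyperparameter values are made up for illustration:

```java
import java.util.Random;

// Illustrative mini-batch SGD for y ≈ w1 * x + b with squared loss.
public class MiniBatchSgdSketch {
    public static void main(String[] args) {
        Random rnd = new Random(42);

        // Synthetic data generated from y = 3x + 2 plus noise (illustration only).
        int n = 1000;
        double[] x = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) {
            x[i] = rnd.nextDouble() * 10;
            y[i] = 3.0 * x[i] + 2.0 + rnd.nextGaussian() * 0.5;
        }

        double w1 = 0.0, b = 0.0;
        double learningRate = 0.01;
        int batchSize = 32;          // mini-batch: between SGD (1) and full batch (n)
        int steps = 2000;

        for (int step = 0; step < steps; step++) {
            double gradW = 0.0, gradB = 0.0;
            // Sample a random mini-batch and accumulate gradients of the squared loss.
            for (int k = 0; k < batchSize; k++) {
                int i = rnd.nextInt(n);
                double pred = w1 * x[i] + b;   // y'
                double err = pred - y[i];      // deviation from the actual y
                gradW += 2 * err * x[i];
                gradB += 2 * err;
            }
            // "Compute parameter updates": move against the averaged gradient.
            w1 -= learningRate * gradW / batchSize;
            b  -= learningRate * gradB / batchSize;
        }

        System.out.printf("learned w1=%.3f b=%.3f%n", w1, b);
    }
}
```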

For simplicity, we focused on gradient descent with a single feature; rest assured, gradient descent also works on feature sets containing multiple features.

Note also the lines from x1 and x2 to the model in the (not reproduced) diagram: the thickness of each line represents the weight of the corresponding feature in the model; the thicker the line, the higher the weight.

TensorFlow consists of two components:

a graph protocol buffer, and a runtime that executes the (distributed) graph. The two components are analogous to the Java compiler and the JVM: just as the JVM runs on multiple hardware platforms, TensorFlow runs on multiple hardware platforms (CPUs and GPUs).

Which API should you use? Use the highest level of abstraction that solves your problem. The higher abstraction layers are easier to use but (by design) less flexible. We recommend starting with the highest-level API and getting everything working first; if you later need more flexibility for some special modeling concern, drop down one level. Note that each level is built on the lower-level APIs, so moving down the hierarchy should be fairly intuitive.

Java

FormImageFilesAndTextsUpload

Convert the Java code below to C# code.

import java.io.*;
import java.net.*;

public class FormImageFilesAndTextsUpload {
    public static void main(String[] args) {
        try {
            String url = "http://10.27.122.101:8001/uploadfiles/";
            String charset = "UTF-8";

            URLConnection connection = new URL(url).openConnection();
            connection.setDoOutput(true);
            connection.setDoInput(true);
            connection.setUseCaches(false); // POST requests must not use the cache
            // Set request headers
            connection.setRequestProperty("Connection", "Keep-Alive");
            connection.setRequestProperty("Charset", charset);
            // Set the multipart boundary
            String BOUNDARY = "----------" + System.currentTimeMillis();
            connection.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + BOUNDARY);

            OutputStream output = connection.getOutputStream();

            // Write the binary content of each image file to the URLConnection output stream
            String[] fileNames = {"E:\\11.png", "E:\\22.png"};
            for (String fileName : fileNames) {
                File file = new File(fileName);
                // Per-file part headers in the request body
                // Part one: the multipart headers for this file
                StringBuilder buff = new StringBuilder();
                buff.append("--"); // 必须多两道线
                buff.append(BOUNDARY);
                buff.append("\r\n");
                buff.append("Content-Disposition: form-data;name=\"files\";filename=\"" + file.getName() + "\"\r\n");
                buff.append("Content-Type:image/png\r\n\r\n");
                byte[] fileHead = buff.toString().getBytes(charset);
                // Write the part header
                output.write(fileHead);

                FileInputStream in = new FileInputStream(file);
                // File body: stream the file content into the request
                DataInputStream dataIn = new DataInputStream(in);
                int bytes = 0;
                byte[] bufferOut = new byte[1024];
                while ((bytes = dataIn.read(bufferOut)) != -1) {
                    output.write(bufferOut, 0, bytes);
                }
                in.close();

                // Write a separator between files; without it both files would end up in a single part
                output.write("\r\n".getBytes(charset));
            }

            // The "sns" text parameters that follow the files
            String [] sns = {"0A0B3%13ZR10200602+21121701", "0A0B3%13ZR10200602+21121702"};
            String snSep = "";
            StringBuilder snBuf = new StringBuilder();
            for (String sn : sns) {
                snBuf.append(snSep);
                snBuf.append("--"); // 必须多两道线
                snBuf.append(BOUNDARY);
                snBuf.append("\r\n");
                snBuf.append("Content-Disposition: form-data; name=\"sns\"").append(CRLF);
                snBuf.append("Content-Type: text/plain; charset=" + charset).append(CRLF).append(CRLF);
                snBuf.append(sn);
                snSep = CRLF;
            }
            output.write(snBuf.toString().getBytes("utf-8"));

            // Closing part
            byte[] foot = ("\r\n--" + BOUNDARY + "--\r\n").getBytes(charset); // final boundary line
            output.write(foot);
            output.flush();

            output.close();

            // Read the response
            StringBuffer strBuf = new StringBuffer();  
            BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));  
            String line = null;  
            while ((line = reader.readLine()) != null) {  
                strBuf.append(line).append("\n");  
            }  
            System.out.println("文件上传返回信息" +strBuf.toString());  
            reader.close();  
            // connection.disconnect();  
            // connection = null;  
        }
        catch(IOException ioe) {
            ioe.printStackTrace();
        }
    }
	
    private static String CRLF = "\r\n";
}	