View full version: Computer vision's next breakthrough?

kormer 2023-9-26 07:09

[url]https://www.infoworld.com/article/3706989/computer-visions-next-breakthrough.html[/url]

How many manufacturers in Hong Kong use AI for QC on their production lines? In the past they probably relied mostly on technologies like OpenCV, but it looks different now. Is that true? Thanks.

Zzlaz 2023-9-26 11:10

Computer vision should be done with deep learning, right?

kormer 2023-9-26 11:53

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-26 11:10

Computer vision should be done with deep learning, right? [/quote]
It's CNN.

Zzlaz 2023-9-26 12:08

[quote]Originally posted by [i]kormer[/i] on 2023-9-26 11:53 AM

It's CNN. [/quote]
CNN is deep learning.

kormer 2023-9-26 12:15

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-26 12:08

CNN is deep learning. [/quote]
Yes, it's used for updating the parameters.

kormer 2023-9-26 19:11

What is synthetic data, and why is it important?

Zzlaz 2023-9-26 20:10

[quote]Originally posted by [i]kormer[/i] on 2023-9-26 12:15 PM

Yes, it's used for updating the parameters. [/quote]
tensorflow or pytorch

kormer 2023-9-26 21:10

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-26 20:10

tensorflow or pytorch [/quote]
What about activation functions?

Zzlaz 2023-9-27 00:33

[quote]Originally posted by [i]kormer[/i] on 2023-9-26 09:10 PM

What about activation functions? [/quote]
Those are two different things.
As for activation functions, hidden layers typically use ReLU, and the output layer uses something like linear or softmax.
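
For illustration, a minimal PyTorch sketch (my own, not from the thread; the layer sizes are arbitrary) of ReLU in the hidden layer with a linear/softmax output:

[code]
import torch
import torch.nn as nn

# Hidden layer uses ReLU; the output layer is linear (raw logits).
# For classification, softmax turns logits into probabilities; in
# practice nn.CrossEntropyLoss applies it internally during training.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),            # hidden-layer activation
    nn.Linear(128, 10),   # linear output (logits)
)

logits = model(torch.randn(1, 1, 28, 28))
probs = torch.softmax(logits, dim=1)  # class probabilities
[/code]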

kormer 2023-9-27 04:35

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-27 00:33

Those are two different things.
As for activation functions, hidden layers typically use ReLU, and the output layer uses something like linear or softmax. [/quote]
Then what kind of updates would reduce the occurrence of overfitting?

Zzlaz 2023-9-27 07:42

[quote]Originally posted by [i]kormer[/i] on 2023-9-27 04:35 AM

Then what kind of updates would reduce the occurrence of overfitting? [/quote]
Overfitting is yet another separate thing.
You reduce the model's complexity so that it generalizes better.

e.g. dropout
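
To make that concrete, a hedged sketch of adding dropout to a PyTorch model (the layer sizes and the rate p are illustrative choices, not prescriptions):

[code]
import torch.nn as nn

# nn.Dropout zeroes a random fraction p of activations on every
# training step; after model.eval() it becomes a no-op, so the
# full network is used at inference time.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # p is a hyperparameter; 0.2-0.5 is a common range
    nn.Linear(256, 10),
)
[/code]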

kormer 2023-9-27 10:36

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-27 07:42

Overfitting is yet another separate thing.
You reduce the model's complexity so that it generalizes better.

e.g. dropout [/quote]
How do you apply dropout, then?

Zzlaz 2023-9-27 14:23

[quote]Originally posted by [i]kormer[/i] on 2023-9-27 10:36 AM

How do you apply dropout, then? [/quote]
Overfitting happens when the model sticks too closely to the training data; the model is too complex.

kormer 2023-9-27 16:37

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-27 14:23

Overfitting happens when the model sticks too closely to the training data; the model is too complex. [/quote]
I don't understand what you mean.

kormer 2023-9-27 16:39

How many units do you need to drop?

kormer 2023-9-28 11:40

Apart from dropout, are there other strategies that can make the model a bit more accurate? Thanks.

Zzlaz 2023-9-29 00:12

[quote]Originally posted by [i]kormer[/i] on 2023-9-27 04:37 PM

I don't understand what you mean. [/quote]
Then you should go read up on machine learning.

kormer 2023-9-29 00:36

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-29 00:12

Then you should go read up on machine learning. [/quote]
When do you need to use machine learning?

kormer 2023-9-29 01:06

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-29 00:12

Then you should go read up on machine learning. [/quote]
One more thing: what does "sticks too closely" mean? Are you referring to density?

Besides, model complexity can also depend on how large or small the weights are.

Zzlaz 2023-9-29 08:10

[quote]Originally posted by [i]kormer[/i] on 2023-9-29 12:36 AM

When do you need to use machine learning? [/quote]
For problems that can't be solved by simply writing program logic by hand, such as computer vision.

Zzlaz 2023-9-29 08:12

[quote]Originally posted by [i]kormer[/i] on 2023-9-29 01:06 AM

One more thing: what does "sticks too closely" mean? Are you referring to density?

Besides, model complexity can also depend on how large or small the weights are. [/quote]
So go read up first before asking a whole pile of questions.

Thought without learning is perilous.

kormer 2023-9-29 10:36

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-29 08:12

So go read up first before asking a whole pile of questions.

Thought without learning is perilous. [/quote]
I'm thinking about the sparse versus dense question. What's your view?

[[i] This post was last edited by kormer on 2023-9-29 15:20 [/i]]

kormer 2023-9-29 10:57

L2 regularization and dropout are two popular techniques used in deep learning to reduce the risk of overfitting. Overfitting occurs when a model is too complex, or has too many parameters relative to the amount of training data, and therefore generalizes poorly beyond the training set. Regularizing the model reduces variance without significantly increasing bias, which helps prevent overfitting.

L2 regularization works by adding a penalty term for large weights to the cost function. The penalty is proportional to the square of each weight coefficient, which discourages individual weights from growing extreme over the course of gradient-descent training. Regularizing this way keeps all the weights relatively small and reduces the model's effective complexity without adding any structural element the way dropout does.
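
As a sketch (assuming PyTorch; the hyperparameter values are illustrative), the penalty can be added to the loss explicitly, or via the optimizer's weight_decay argument, which adds an equivalent term to each weight's gradient:

[code]
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
x, y = torch.randn(8, 10), torch.randn(8, 1)

# Option 1: explicit L2 penalty added to the loss.
l2_lambda = 1e-4
l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
loss = criterion(model(x), y) + l2_lambda * l2_penalty

# Option 2: weight_decay in the optimizer (plain L2 for vanilla SGD).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
[/code]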

Dropout, on the other hand, works by randomly "dropping out" neurons while training: on each forward pass a random subset of activations is zeroed, so those neurons contribute nothing to that step's parameter updates. In the common "inverted dropout" formulation, the surviving activations are scaled up by 1/(1-p) at training time, so that at inference the full network can be used as-is with no rescaling. Training this way acts as an implicit ensemble over many thinned sub-networks and discourages neurons from co-adapting, that is, from relying on particular other neurons always being present, which tends to improve generalization on tasks such as image classification and sentiment analysis.
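
The "inverted" scaling is easy to show directly; a minimal sketch of the mechanism (my own, assuming PyTorch):

[code]
import torch

def inverted_dropout(x, p=0.5, training=True):
    # Zero each activation with probability p during training and
    # scale the survivors by 1 / (1 - p), so the expected value of
    # each activation stays unchanged. At inference time the
    # function is the identity: no mask and no rescaling needed.
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) >= p).float()
    return x * mask / (1.0 - p)

h = torch.ones(4)
print(inverted_dropout(h, p=0.5))            # survivors scaled to 2.0
print(inverted_dropout(h, training=False))   # unchanged at inference
[/code]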

In conclusion, L2 regularization constrains weight values towards smaller magnitudes, while dropout acts more like an ensemble method that prevents feature co-adaptation through random masking. Both reduce excessive model complexity and improve accuracy when predicting data points outside the original training set.

[url]https://www.quora.com/What-is-the-difference-between-L2-regularization-and-dropout-in-neural-networks[/url]

kormer 2023-9-29 11:55

[quote]Originally posted by [i]Zzlaz[/i] on 2023-9-29 08:10

For problems that can't be solved by simply writing program logic by hand, such as computer vision. [/quote]
Machine learning is defined as an automated process that extracts patterns from data. With supervised machine learning, a model of the relationship between descriptive features and a target feature is learnt automatically from a set of historical instances.

Without that automation, the solutions might require a lot of hand-tuning or long lists of rules. In other words, ML can make the task easier to a certain extent.
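
A toy sketch of that contrast (assuming scikit-learn is available; the data and model here are made up purely for illustration):

[code]
from sklearn.tree import DecisionTreeClassifier

# Historical instances: descriptive features X and a target feature y.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # an XOR-like rule that we never write down by hand

# The supervised model learns the feature-to-target relationship
# automatically from the data instead of from hand-coded rules.
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[1, 0]]))  # -> [1]
[/code]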