Good thoughts in 2023
Last updated: a month ago
These are good thoughts accumulated over 2023.
1) Introduce data augmentation into rotjigsaw
Something similar to Gaussian noise and random resized crop
1) Literature review: MAE and others; check whether anyone has already done this
2) Data augmentation design can reference MAE
3) Slice and merge schemes
4) Is recovering to a larger feature map workable?
1) Slice, select, sew, resize, encode, decode, then a similarity loss between the original and the output
2) Slice, select, sew, encode, decode, resize, then a similarity loss between the original and the output
This needs a considerable number of experiments, which we can't afford now
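The first slice–select–sew variant above can be sketched in a few lines. The encoder/decoder (and resize) steps are omitted, the "select" step is approximated by a random patch permutation, and all function names here are hypothetical:

```python
import numpy as np

def slice_image(img, grid=2):
    """Slice an H x W image into a grid x grid list of equal patches."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    return [img[i*h:(i+1)*h, j*w:(j+1)*w]
            for i in range(grid) for j in range(grid)]

def sew(patches, grid=2):
    """Sew patches back into one image in row-major order."""
    rows = [np.concatenate(patches[r*grid:(r+1)*grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)

def similarity_loss(a, b):
    """1 - cosine similarity between the flattened images."""
    a, b = a.ravel(), b.ravel()
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

rng = np.random.default_rng(0)
ori = rng.random((8, 8))

# Scheme 1: slice -> select -> sew -> (resize/encode/decode omitted) -> loss.
patches = slice_image(ori)
order = rng.permutation(len(patches))   # "select": a random patch order
sewn = sew([patches[i] for i in order])
loss = similarity_loss(ori, sewn)       # compare against the original
```

In a real experiment the loss would be computed between encoder/decoder features rather than raw pixels, but the slice/sew bookkeeping stays the same.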
1) layer sleep: Efficient Self-supervised Continual Learning with Progressive Task-correlated Layer Freezing
2) channel sleep
3) kernel sleep: Rethinking 1×1 Convolutions: Can we train CNNs with Frozen Random Filters?
4) instance segmentation on COCO
5) Design a new decoder for mixupmask (a question: what kinds of decoder can be transferred?)
6) Step weight freezing in transfer learning (it is difficult to determine how many layers need to be frozen; most of the datasets I used fall in the "small" scale region)
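The "kernel sleep" idea, in the spirit of the frozen-random-filters paper cited above, can be sketched with NumPy: fixed random convolution kernels are never updated, and only a 1×1 combination over their feature maps is trained. The shapes, learning rate, and regression target are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# Frozen ("sleeping") random 3x3 filters: never updated during training.
frozen_filters = rng.standard_normal((4, 3, 3))

# The only trainable parameters: a 1x1 combination over the frozen maps.
w = rng.standard_normal(4) * 0.1

img = rng.standard_normal((6, 6))
target = rng.standard_normal((4, 4))    # illustrative regression target
feats = np.stack([conv2d_valid(img, k) for k in frozen_filters])

losses = []
for _ in range(200):
    pred = np.tensordot(w, feats, axes=1)  # 1x1 conv = weighted channel sum
    err = pred - target
    losses.append(float(np.mean(err ** 2)))
    # Gradient step on w only; frozen_filters stay untouched.
    grad = np.tensordot(feats, 2.0 * err / err.size, axes=([1, 2], [0, 1]))
    w -= 0.01 * grad
```

The loss drops even though the filters themselves never move, which is the point of the frozen-filter line of work.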
1) Run BYOL, SwAV, Barlow Twins, and cocor experiments for FullRot; submit the paper to PR
2) U-Net for MixupMask
3) A direct segmentation pretext: fully unsupervised segmentation, cropping regions with random ratio and angle
4) Could set a loss monitor to accumulate the different losses by scale; not working as-is, maybe an adaptive loss; check whether multi-loss works
5) The model does not know which rotated image is the background
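For idea 4, one common form of adaptive loss is uncertainty-style weighting: learn a log-variance per loss term so that losses on very different scales balance themselves. This is a generic sketch, not FullRot's actual loss, and the two loss values are made up:

```python
import numpy as np

# Two per-scale losses on very different scales (made-up values).
losses = np.array([10.0, 0.01])

# One learnable log-variance s_i per loss; total = sum(exp(-s_i)*L_i + s_i).
log_vars = np.zeros(2)

def total_loss(log_vars, losses):
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))

for _ in range(300):
    # d/ds_i [exp(-s_i) * L_i + s_i] = 1 - exp(-s_i) * L_i
    grad = 1.0 - np.exp(-log_vars) * losses
    log_vars -= 0.05 * grad

weights = np.exp(-log_vars)  # effective weight on each loss term
```

At convergence each weighted term satisfies exp(-s_i) * L_i = 1, so the large loss gets a small weight and the small loss a large one, which is exactly the "accumulate by scale" behavior item 4 is after.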
Open question: what is the right way to gradually unfreeze layers in a neural network while training with TensorFlow?
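A framework-agnostic sketch of one answer: keep a per-layer trainable flag and unfreeze layers top-down on a schedule. In Keras the flag would be applied via each layer's `trainable` attribute (recompiling the model after changing it); the function and its parameters here are hypothetical:

```python
def unfreeze_schedule(num_layers, epoch, unfreeze_every=2):
    """Trainable flags per layer: only the top layer trains at epoch 0,
    then one more layer (from the top down) unfreezes every
    `unfreeze_every` epochs until the whole network is trainable."""
    unfrozen = min(num_layers, 1 + epoch // unfreeze_every)
    return [i >= num_layers - unfrozen for i in range(num_layers)]

# Epoch 0 on a 5-layer net: only the last layer is trainable.
# Epoch 4 with unfreeze_every=2: the top three layers are trainable.
```

Unfreezing from the top down keeps the generic early features intact longest, which matches the usual transfer-learning intuition behind the "step weight freezing" note above.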
Unless otherwise stated, all articles on this blog are licensed under CC BY-SA 4.0; please credit the source when reposting!