
Fedora 26, FakeRAID, unrecognised m.2 SSD



nrdpxl
23rd September 2017, 08:50 PM
I’m trying to install fedora workstation on 2xm.2 in raid0 made with intel RST and i’m running into issues.

The first one is that after creating the installation usb and loading the system, it halts.
Another option (a bit better) was to create a bootable usb from the fedora image and boot UEFI; it loads the system, but after trying to “install to hdd” the setup doesn’t show the ssd at all.

I tried to switch off the raid too and had no luck finding the ssd.

somewhere i found a suggestion to use mdadm --assemble --scan but this one
gave nothing too as it didn’t find anything.

All in all i’m not very experienced in linux as i’m coming from osx, but would like to switch to linux instead of using windows. Can anyone guide me and share some information on what steps are necessary to make this raid0 configuration visible, to be able to write fedora to disk?

amiga
3rd October 2017, 01:39 AM
I’m trying to install fedora workstation on 2xm.2 in raid0 made with intel RST and i’m running into issues.
All in all i’m not very experienced in linux as i’m coming from osx, but would like to switch to linux instead of using windows.

What you are attempting is a challenging install even for experienced Linux users. It isn't something a beginner should try for their first install.

First a clarification is needed. Are these M.2 form factor PCIe SSDs, possibly with NVMe ? I noticed you don't capitalize anything, even references to yourself ( i'm as opposed to I'm ). The proper name is capitalized M.2.

Please provide the model numbers of these SSDs.

If these are NVMe then that could be why the installer doesn't recognize them immediately.

Also, if these are NVMe, is there a reason you need two of these in parallel ? Is the speed of one not fast enough ? One would be 3-4 times faster than a SATA SSD.

If you have two then do a regular non-RAID Fedora install to one and use the second as a data drive.


somewhere i found a suggestion to use mdadm --assemble --scan but this one
gave nothing too as it didn’t find anything.

This is the command to use if you have already created an mdadm RAID array and want to activate it. As you haven't created anything yet, of course it won't find anything.
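For reference, here is roughly how you would probe for and assemble an existing Intel RST (IMSM) array from a live session. The NVMe device names below are examples and may differ on your system.

```shell
# Check the member disks for Intel Matrix Storage (IMSM) RAID metadata.
# Device names are examples -- list yours first with: lsblk
sudo mdadm --examine /dev/nvme0n1 /dev/nvme1n1

# Scan all block devices and assemble any array whose metadata is found.
# Note the spelling: the option is --assemble, not --assembly.
sudo mdadm --assemble --scan

# If assembly worked, the container and array show up here:
cat /proc/mdstat
```

These commands only read metadata and activate what already exists; they won't create or modify an array.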

nrdpxl
3rd October 2017, 07:30 AM
Thank you for advice

These are two Samsung 960 EVO, they are M.2 NVMe.
And yes, there is a reason why I'm using them in a RAID0 (stripe) configuration: I'm working with lots of video files and need to read and write them to disk. In this configuration my read and write speeds are nearly identical.

Anyway, I think I will buy another M.2 just for Fedora. As far as I've read, there are lots of people having the same issues as me.

nrdpxl
19th October 2017, 11:57 AM
I thought I would buy a SATA SSD for linux, but somehow after installing the new SSD I can't even boot the Live Fedora USB ;(

amiga
20th October 2017, 06:26 PM
Have you read the Intel paper describing how Intel RST can be used with NVMe SSDs by Linux mdadm RAID ? It should be your starting point.

https://www.intel.com/content/dam/support/us/en/documents/solid-state-drives/Quick_Start_RSTe_NVMe_for%20Linux.pdf


I thought I would buy a SATA SSD for linux, but somehow after installing the new SSD I can't even boot the Live Fedora USB ;(

What drive is set for first boot priority in your firmware ? If it is set to the SATA SSD and the SSD is unformatted that may be the problem. You need to set the boot priority to the live USB.

nrdpxl
20th October 2017, 09:47 PM
It's set to boot from USB.

So, before installing the SATA SSD, I was able to boot with no problem, but the issue was that no disk was found later, so I couldn't proceed with the install.

After installing the new SATA SSD, I'm unable to boot at all (load the live USB),
and the message I get is:

[1.457557] Error parsing PCC subspaces from PCCT
[1.500924] ACPI Error: [SDS0] Namespace lookup failure, AE_NOT_FOUND (20170119/psargs-363)
[1.500929] ACPI Error: [SDS0] Method parse/execution failed [\SHAD._STA] (Node ffff8e121e136730), AE_NOT_FOUND (20170119/psparse-543)
[1.500941] ACPI Error: [SDS0] Namespace lookup failure, AE_NOT_FOUND (20170119/psargs-363)
[1.500944] ACPI Error: [SDS0] Method parse/execution failed [\SHAD._STA] (Node ffff8e121e136730), AE_NOT_FOUND (20170119/psparse-543)
[1.521462] ACPI Error: [SDS0] Namespace lookup failure, AE_NOT_FOUND (20170119/psargs-363)
[1.521544] ACPI Error: [SDS0] Method parse/execution failed [\SHAD._STA] (Node ffff8e121e136730), AE_NOT_FOUND (20170119/psparse-543)
[1.572384] ACPI Error: [SDS0] Namespace lookup failure, AE_NOT_FOUND (20170119/psargs-363)
[1.572388] ACPI Error: [SDS0] Method parse/execution failed [\SHAD._STA] (Node ffff8e121e136730), AE_NOT_FOUND (20170119/psparse-543)

and it stops.

amiga
21st October 2017, 10:20 AM
Since you are having ACPI errors you should disable ACPI at the kernel level by adding acpi=off as a kernel parameter in the live USB bootloader. If GRUB is used, typing e usually allows you to edit the boot stanza.
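For example, the live USB's GRUB entry can be edited in place; the kernel line below is only a sketch (the exact label and paths vary by image), with acpi=off appended at the end.

```shell
# At the GRUB menu: highlight the entry, press 'e', find the line starting
# with "linuxefi" (or "linux") and append acpi=off, e.g.:
#
#   linuxefi /images/pxeboot/vmlinuz root=live:LABEL=Fedora-WS-Live rd.live.image quiet acpi=off
#
# Then press Ctrl-x to boot with the modified parameters. If acpi=off
# causes other problems, milder variants such as acpi=noirq or pci=noacpi
# are worth trying first.
```

The edit is temporary; it applies only to that one boot and is lost on reboot.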

Did you download and read the Intel paper ? It is exactly what you need.

nrdpxl
21st October 2017, 12:49 PM
thanks for your help! I was able to boot to the live-usb, but i found there are still no disks shown in the installer or gpart, only sba which is the USB flash drive. I think it means no linux for me. It's really sad that such simple setups are so complicated.

amiga
22nd October 2017, 02:05 AM
I was able to boot to the live-usb, but i found there are still no disks shown in the installer or gpart, only sba which is the USB flash drive.

Do you mean GParted and sda ?

If the SSD is unformatted it may not show up in the installer. Is the new SSD viewable by UEFI firmware ?

Please run the following and post the results.


$ ls -l /dev/sd*
$ ls -l /dev/disk/by-id/*
$ ls -l /dev/disk/by-path/*
$ sudo fdisk -l

nrdpxl
22nd October 2017, 05:08 PM
Yes, GParted doesn't show any disks besides the usb. Here is what I get in the terminal:



[liveuser@Unknown-30-9c-23-1c-b3-36-2 ~]$ ls -l /dev/sd*
brw-rw----. 1 root disk 8, 0 Oct 22 2017 /dev/sda
brw-rw----. 1 root disk 8, 1 Oct 22 2017 /dev/sda1
brw-rw----. 1 root disk 8, 2 Oct 22 2017 /dev/sda2
brw-rw----. 1 root disk 8, 3 Oct 22 2017 /dev/sda3




[liveuser@Unknown-30-9c-23-1c-b3-36-2 ~]$ ls -l /dev/disk/by-id/*
lrwxrwxrwx. 1 root root 10 Oct 22 2017 /dev/disk/by-id/dm-name-live-base -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Oct 22 2017 /dev/disk/by-id/dm-name-live-rw -> ../../dm-0
lrwxrwxrwx. 1 root root 9 Oct 22 2017 /dev/disk/by-id/usb-PNY_USB_3.0_FD_070D5A7BAEE55573-0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Oct 22 2017 /dev/disk/by-id/usb-PNY_USB_3.0_FD_070D5A7BAEE55573-0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Oct 22 2017 /dev/disk/by-id/usb-PNY_USB_3.0_FD_070D5A7BAEE55573-0:0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Oct 22 2017 /dev/disk/by-id/usb-PNY_USB_3.0_FD_070D5A7BAEE55573-0:0-part3 -> ../../sda3




[liveuser@Unknown-30-9c-23-1c-b3-36-2 ~]$ ls -l /dev/disk/by-path/*
lrwxrwxrwx. 1 root root 9 Oct 22 2017 /dev/disk/by-path/pci-0000:00:14.0-usb-0:3:1.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Oct 22 2017 /dev/disk/by-path/pci-0000:00:14.0-usb-0:3:1.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Oct 22 2017 /dev/disk/by-path/pci-0000:00:14.0-usb-0:3:1.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Oct 22 2017 /dev/disk/by-path/pci-0000:00:14.0-usb-0:3:1.0-scsi-0:0:0:0-part3 -> ../../sda3




[liveuser@Unknown-30-9c-23-1c-b3-36-2 ~]$ sudo fdisk -l
Disk /dev/sda: 28.9 GiB, 31004295168 bytes, 60555264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0975cb23

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 0 3053567 3053568 1.5G 0 Empty
/dev/sda2 104772 117775 13004 6.4M ef EFI (FAT-12/16/32)
/dev/sda3 117776 145855 28080 13.7M 0 Empty


Disk /dev/loop0: 1.4 GiB, 1482072064 bytes, 2894672 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 6.5 GiB, 6981419008 bytes, 13635584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 512 MiB, 536870912 bytes, 1048576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/live-rw: 6.5 GiB, 6981419008 bytes, 13635584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/live-base: 6.5 GiB, 6981419008 bytes, 13635584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[liveuser@Unknown-30-9c-23-1c-b3-36-2 ~]$


So I'm unsure why it's impossible to find my other SSD, as on windows they work just fine, and from my previous experience with linux I was able to load
drives, format them with Gparted and install linux on them with no problems. As I mentioned previously there is a raid0 with 2x m.2 drives, but now there is also a single SATA SSD in the system, and I was hoping the OS would find it and be able to use it, but sadly no luck. Just to mention, I had no luck with the ubuntu and mint distros either, so maybe there is some problem with my PC or a lack of knowledge.

Thanks for your help once again.

amiga
23rd October 2017, 08:50 AM
I just noticed that you never mentioned anything about your hardware such as the chipset version of your motherboard. Is the computer brand new and latest generation ?

Please post the output of lspci. This will list all of the controllers on your MB such as the SATA controllers.

Also, did you look at the kernel messages during boot to see if the SSD is detected ? You can use dmesg or journalctl. The following is part of the output when my main SSD is detected.


Oct 22 20:35:10 server47.localdomain smartd[1077]: Device: /dev/sdb [SAT], opened
Oct 22 20:35:10 server47.localdomain smartd[1077]: Device: /dev/sdb [SAT], Samsung SSD 850 EVO 250GB, S/N:S2R5NX0H524929J, WWN:5-002538-d40e18c38, FW:EMT02B6Q, 250 GB
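From the live session, a couple of one-liners will pull out the disk-related boot messages; the grep pattern is just an example and can be widened or narrowed.

```shell
# Kernel ring buffer: AHCI controller, SATA link, SCSI disk and NVMe messages.
sudo dmesg | grep -i -E 'ahci|ata[0-9]|nvme|sd[a-z]'

# The same information from the journal, kernel messages of this boot only.
sudo journalctl -k -b | grep -i -E 'ahci|ata[0-9]|nvme|sd[a-z]'
```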

nrdpxl
24th October 2017, 02:17 PM
this is something like what I get (I had to shorten it due to too many characters):



[liveuser@Unknown-30-9c-23-1c-b3-36 ~]$ lspci
00:00.0 Host bridge: Intel Corporation Device 2020 (rev 04)
00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
00:05.0 System peripheral: Intel Corporation Sky Lake-E MM/Vt-d Configuration Registers (rev 04)
00:05.2 System peripheral: Intel Corporation Device 2025 (rev 04)
00:05.4 PIC: Intel Corporation Device 2026 (rev 04)
00:08.0 System peripheral: Intel Corporation Sky Lake-E Ubox Registers (rev 04)
00:08.1 Performance counters: Intel Corporation Sky Lake-E Ubox Registers (rev 04)
00:08.2 System peripheral: Intel Corporation Sky Lake-E Ubox Registers (rev 04)
00:14.0 USB controller: Intel Corporation 200 Series PCH USB 3.0 xHCI Controller
00:14.2 Signal processing controller: Intel Corporation 200 Series PCH Thermal Subsystem
00:16.0 Communication controller: Intel Corporation 200 Series PCH CSME HECI #1
00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode]
00:1c.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #1 (rev f0)
00:1c.2 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #3 (rev f0)
00:1c.5 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #6 (rev f0)
00:1f.0 ISA bridge: Intel Corporation Device a2d2
00:1f.2 Memory controller: Intel Corporation 200 Series PCH PMC
00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio
00:1f.4 SMBus: Intel Corporation 200 Series PCH SMBus Controller
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V
01:00.0 USB controller: ASMedia Technology Inc. Device 2142
02:00.0 USB controller: ASMedia Technology Inc. Device 2142
03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
16:00.0 PCI bridge: Intel Corporation Sky Lake-E PCI Express Root Port A (rev 04)
16:05.0 System peripheral: Intel Corporation Device 2034 (rev 04)
16:05.2 System peripheral: Intel Corporation Sky Lake-E RAS Configuration Registers (rev 04)
16:05.4 PIC: Intel Corporation Device 2036 (rev 04)
16:08.0 System peripheral: Intel Corporation Sky Lake-E CHA Registers (rev 04)
17:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)
17:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
64:00.0 PCI bridge: Intel Corporation Sky Lake-E PCI Express Root Port A (rev 04)
64:05.0 System peripheral: Intel Corporation Device 2034 (rev 04)
64:05.2 System peripheral: Intel Corporation Sky Lake-E RAS Configuration Registers (rev 04)
64:05.4 PIC: Intel Corporation Device 2036 (rev 04)
64:08.0 System peripheral: Intel Corporation Device 2066 (rev 04)
65:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)
65:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
b2:05.0 System peripheral: Intel Corporation Device 2034 (rev 04)
b2:05.2 System peripheral: Intel Corporation Sky Lake-E RAS Configuration Registers (rev 04)
b2:05.4 PIC: Intel Corporation Device 2036 (rev 04)
b2:12.0 Performance counters: Intel Corporation Sky Lake-E M3KTI Registers (rev 04)
b2:12.1 Performance counters: Intel Corporation Sky Lake-E M3KTI Registers (rev 04)
b2:12.2 System peripheral: Intel Corporation Sky Lake-E M3KTI Registers (rev 04)
b2:15.0 System peripheral: Intel Corporation Sky Lake-E M2PCI Registers (rev 04)
b2:17.0 System peripheral: Intel Corporation Sky Lake-E M2PCI Registers (rev 04)



I think this is what I get regarding my disks:



[ 4.507164] Linux agpgart interface v0.103
[ 4.508742] ahci 0000:00:17.0: version 3.0
[ 4.508820] ahci 0000:00:17.0: Found 2 remapped NVMe devices.
[ 4.508821] ahci 0000:00:17.0: Switch your BIOS from RAID to AHCI mode to use them.
[ 4.508825] ahci 0000:00:17.0: controller can't do SNTF, turning off CAP_SNTF
[ 4.509035] ahci 0000:00:17.0: AHCI 0001.0301 32 slots 4 ports 6 Gbps 0xf impl RAID mode
[ 4.509036] ahci 0000:00:17.0: flags: 64bit ncq led clo only pio slum part deso sadm sds apst
[ 4.509040] ahci 0000:00:17.0: both AHCI_HFLAG_MULTI_MSI flag set and custom irq handler implemented
[ 4.509459] scsi host0: ahci
[ 4.509596] scsi host1: ahci
[ 4.509727] scsi host2: ahci
[ 4.509853] scsi host3: ahci
[ 4.509870] ata1: SATA max UDMA/133 abar m524288@0x92e00000 port 0x92e00100 irq 31
[ 4.509874] ata2: SATA max UDMA/133 abar m524288@0x92e00000 port 0x92e00180 irq 32
[ 4.509875] ata3: SATA max UDMA/133 abar m524288@0x92e00000 port 0x92e00200 irq 33
[ 4.509876] ata4: SATA max UDMA/133 abar m524288@0x92e00000 port 0x92e00280 irq 34
[ 4.509985] libphy: Fixed MDIO Bus: probed


and some parts marked in red:



[ 56.819582] EDAC skx: ECC is disabled on imc 0
[ 56.878999] nouveau 0000:17:00.0: priv: HUB0: 00d054 00000007 (1b408216)
[ 56.880809] nouveau 0000:17:00.0: DRM: Pointer to flat panel table invalid
[ 57.203214] nouveau 0000:17:00.0: DRM: failed to create kernel channel, -22
[ 57.442017] nouveau 0000:65:00.0: priv: HUB0: 00d054 00000007 (18408216)
[ 57.443751] nouveau 0000:65:00.0: DRM: Pointer to flat panel table invalid
[ 57.746253] nouveau 0000:65:00.0: DRM: failed to create kernel channel, -22


I'm using an MSI X299 SLI Plus chipset motherboard with an Intel 7920X CPU.

amiga
24th October 2017, 06:45 PM
I think I have found the problem. All of your SATA controllers are part of a RAID bus controller instead of being normal SATA controllers as on my board. Since this controller is in RAID mode, Linux can't see any SATA drives.


00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode]

[ 4.508742] ahci 0000:00:17.0: version 3.0
[ 4.508820] ahci 0000:00:17.0: Found 2 remapped NVMe devices.
[ 4.508821] ahci 0000:00:17.0: Switch your BIOS from RAID to AHCI mode to use them.
[ 4.508825] ahci 0000:00:17.0: controller can't do SNTF, turning off CAP_SNTF
[ 4.509035] ahci 0000:00:17.0: AHCI 0001.0301 32 slots 4 ports 6 Gbps 0xf impl RAID mode

As the output above shows, the 4-port 6 Gbps controller is in RAID mode. Linux can't see attached drives in this mode.

In contrast my 2011 Z68 Gigabyte board has two separate normal SATA controllers.


$ lspci | grep -i sata
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller (rev 05)
08:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9172 SATA 6Gb/s Controller (rev 11)

You need to go into your firmware and switch this controller to AHCI mode so you can see the new SSD and possibly the two NVMe drives as well.
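After switching, you can verify the mode from a live session; 00:17.0 is the PCI address taken from your own lspci listing.

```shell
# In AHCI mode this device should report "SATA controller ... AHCI"
# rather than "RAID bus controller":
lspci -s 00:17.0

# The "Found 2 remapped NVMe devices" warning should also be gone:
sudo dmesg | grep -i 'remapped'
```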

I found this archived post from another forum that describes the problems you have been having.

https://www.reddit.com/r/linuxquestions/comments/67qdzu/nvme_pcie_ssd_raid_or_ahci_mode_for_linuxubuntu/


I have a newer xps 15, with a similar ssd. My experience: if it's in RAID, gparted/Linux cannot see your hard drive at all. It works fine in ahci.

nrdpxl
24th October 2017, 07:15 PM
But my main concern is that if I switch the raid off to AHCI mode, I will lose all the information in the raid0. As I'm using this partition with windows I think it's too risky to try.

Other thing: to set up the raid I have to do it from the BIOS, so as I understand it, it's not a software-level raid but goes through some of the intel chipset. During the windows installation I had to load a driver from a usb stick to make the volume visible to the installer. From what I know so far, dmraid or mdraid or mdadm should be able to make my disk visible to the Fedora installer, but I just don't know how, as all the guides say to enable mdraid=true and the volume itself should be in /dev/md, but somehow this doesn't work for me ;/

amiga
28th October 2017, 09:01 PM
I found an older but longer paper that describes how to share an RST array between Windows and Linux with mdraid. It however doesn't mention anything about RAID and AHCI modes in the BIOS.

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/rst-linux-paper.pdf

On page 8 it mentions this:

Various Linux installers (Anaconda* in RedHat*, YaST* in SLES) have a specialized
module that assembles RAIDs volumes using mdadm.


But my main concern is that if I switch the raid off to AHCI mode, I will lose all information in raid0. As I'm using this partition with windows I think it's too risky to try.

If you don't write anything at all to the RAID array you won't lose any data. You could do a test where you switch to AHCI temporarily, boot the live USB, and see if the drives are visible. A live distribution won't touch any drives unless you tell it to. You could then switch back to RAID and boot into Windows.

If Linux sees the drives in AHCI mode you could then decide how to proceed.
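As a sketch of that read-only test (the partition name is an example; check lsblk first, and note an NTFS partition needs the ntfs-3g driver on the live image):

```shell
# Find the Windows data partition first.
lsblk -o NAME,SIZE,FSTYPE,LABEL

# Mount it read-only -- nothing is written to the array.
sudo mkdir -p /mnt/raidtest
sudo mount -o ro /dev/sda2 /mnt/raidtest   # example partition name

# If files and directories list correctly, the data is readable.
ls /mnt/raidtest

# Unmount cleanly before switching the firmware back to RAID mode.
sudo umount /mnt/raidtest
```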

nrdpxl
14th November 2017, 08:54 PM
One step closer! With the release of Fedora 27, I was able to load the live disk and see the other SATA drives connected!


Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CCBABB88-2024-4BFF-BC98-DC35DD71CD19

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1953521663 1953257472 931.4G Microsoft basic data


Disk /dev/sdb: 30 GiB, 32262586368 bytes, 63012864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2563bf1a

Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 0 3186687 3186688 1.5G 0 Empty
/dev/sdb2 113104 131123 18020 8.8M ef EFI (FAT-12/16/32)
/dev/sdb3 131124 169251 38128 18.6M 0 Empty


Disk /dev/loop0: 1.4 GiB, 1535807488 bytes, 2999624 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 6.5 GiB, 6981419008 bytes, 13635584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/live-rw: 6.5 GiB, 6981419008 bytes, 13635584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/live-base: 6.5 GiB, 6981419008 bytes, 13635584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


I will try to install there and see how it goes from there!
Once again thank you for your help!!!!

amiga
20th November 2017, 11:22 PM
One step closer! With the release of Fedora 27, I was able to load the live disk and see the other SATA drives connected!

This is good but I have two big questions. You have a RAID0, and you mentioned that you added a new SATA SSD just for Linux. Yet you are apparently finding only one large 932 GiB drive, sda, which has Windows, and a 30 GiB sdb which looks to be your USB drive.

Q1: What has happened to the new SATA SSD you installed ?

Q2: Is F27 somehow detecting the RAID0 NVMe array transparently as a single drive as it would with true hardware RAID ? With fakeRAID both NVMe drives would be seen as sda and sdb and you would access the array with a /dev/mapper entry. Or is F27 detecting only one of the drives ?

To test this you could try reading sda from the live USB by mounting it read-only. If you can read directories and files then it must be accessing both drives in the striped array, as reading from only one drive of a striped array would give only half the information.
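The reasoning can be sketched with a toy striping simulation in Python (the chunk size is shrunk for readability; real RST stripe chunks are typically tens of KiB):

```python
CHUNK = 4  # toy stripe-chunk size in bytes

def stripe(data: bytes, n_members: int):
    """Split data round-robin into n_members 'disks', RAID0-style."""
    members = [bytearray() for _ in range(n_members)]
    for i in range(0, len(data), CHUNK):
        members[(i // CHUNK) % n_members] += data[i:i + CHUNK]
    return [bytes(m) for m in members]

def unstripe(members):
    """Reassemble the original data from ALL members."""
    out = bytearray()
    chunks = [[m[i:i + CHUNK] for i in range(0, len(m), CHUNK)] for m in members]
    for row in zip(*chunks):  # simplification: assumes equal-length members
        for c in row:
            out += c
    return bytes(out)

data = b"The quick brown fox jumps over the lazy dog!!!!!"  # 48 bytes
d0, d1 = stripe(data, 2)

assert unstripe([d0, d1]) == data                         # both members -> full data
assert data[:CHUNK] in d0 and data[CHUNK:2 * CHUNK] in d1  # each holds only alternating chunks
assert len(d0) == len(data) // 2                           # one member alone has half the bytes
```

Each member alone is an interleaving of every other chunk, which is why a single drive of a RAID0 pair is unreadable as a filesystem.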

nrdpxl
21st November 2017, 11:27 AM
ok i will try to explain more clearly.

My setup was:

-raid0 (windows)
--m.2 1 250GB
--m.2-2 250GB

-Sata SSD 1TB (Documents)

all drives run through Intel RST, so none of them were recognised at all, except my usb live disk.
in Fedora 27, my SATA SSD 1TB appeared, so I added another m.2 drive to install Fedora 27 on. so now my setup is:

-raid0 (windows)
--m.2 1 250GB
--m.2-2 250GB

--m.2 250GB (fedora 27)

-Sata SSD 1TB (Documents)

so the answer to question 2 is, sadly, that i'm still unable to see the raid0, but it's now detecting the other drives in my system.
also, to boot linux I have to choose F11 in the BIOS to enter the boot options, as there is no way to install grub on the raid0. It's a bit of a hassle but it's fine.
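Instead of pressing F11 every boot, it may be possible to reorder the UEFI boot entries from within Fedora using efibootmgr. The entry numbers below are illustrative; use the ones from your own listing.

```shell
# List the firmware boot entries and the current boot order.
sudo efibootmgr -v

# Put the Fedora entry first (0001 and 0000 are example entry numbers).
sudo efibootmgr -o 0001,0000
```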

bobx001
21st November 2017, 10:15 PM
Can you try an FC25 instead of FC27 ? I just had a problem with the Disk Config in FC27, so I reverted to installing FC25, and then upgrading with "fedup".

Also, could you try setting your BIOS into "Legacy boot" mode ? ...just a thought. It helped me long ago when trying to install on a laptop over a hardware raid 1.

amiga
22nd November 2017, 07:21 PM
Can you try an FC25 instead of FC27 ? I just had a problem with the Disk Config in FC27, so I reverted to installing FC25, and then upgrading with "fedup".

If you had read the whole thread, including the last post, you would see the OP has now successfully installed F27 on a new M.2 NVMe SSD. It didn't work at all with F26. Also, fedup is obsolete and has not been used since F21; the new method is dnf system upgrade, so you couldn't upgrade from F25 using fedup.

https://fedoraproject.org/wiki/DNF_system_upgrade
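For reference, the workflow on the wiki page above replaces fedup with the dnf system-upgrade plugin; shown here targeting Fedora 27:

```shell
# Fully update the current release first.
sudo dnf upgrade --refresh

# Install the plugin, download the new release, then reboot into the upgrade.
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=27
sudo dnf system-upgrade reboot
```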

By the way what are FC25 and FC27 ? Fedora Core ? It has been just Fedora since Fedora 7 released in May 2007. Therefore your terminology is over 10 years out of date.


Also, could you try setting your BIOS into "Legacy boot" mode ? ...just a thought. It helped me long ago when trying to install on a laptop over a hardware raid 1.

This won't work for the OP as they would have to re-install Windows in Legacy CSM mode, which may not be possible.

Also fakeRAID is not real hardware RAID.

nrdpxl
25th November 2017, 02:35 PM
i can confirm legacy bios mode hasn't solved any issues.
anyway i can't manage to find my m.2 raid0 windows installation and it's no big deal. Just sadly i can't have a grub OS selector when the system loads up, so i have to do it manually with F11.
as I understand it, grub would have to be written to the raid0 partition?

I thought I would make Fedora boot 1st, but this doesn't work either as the raid0 doesn't show up, as I mentioned before.
Anyway this works fine for now, but probably if i fall in love with linux I should lose the raid. :)

Kobuck
25th November 2017, 03:01 PM
By the way what are FC25 and FC27 ? Fedora Core ? It has been just Fedora since Fedora 7 released in May 2007. Therefore your terminology is over 10 years out of date.


The terminology reflects the current naming conventions for Fedora releases.

Current version is:

$ hostnamectl
Static hostname: my_system_name
Icon name: computer-desktop
Chassis: desktop
Machine ID: ...........................................
Boot ID: ...........................................
Operating System: Fedora 27 (Workstation Edition)
CPE OS Name: cpe:/o:fedoraproject:fedora:27
Kernel: Linux 4.13.13-300.fc27.x86_64
Architecture: x86-64

amiga
25th November 2017, 06:09 PM
The terminology reflects the current naming conventions for Fedora releases.

Operating System: Fedora 27 (Workstation Edition)
CPE OS Name: cpe:/o:fedoraproject:fedora:27

Except for the lowercase fc in the kernel line, uppercase FC does not appear in the OS name. The fc in the kernel name is kept for historical continuity: it was 'fc' for Fedora Core 1-6, and 'f' by itself would be too short, so fc was retained. However it has been just Fedora since version 7. Virtually no one calls the OS itself FC; people use the shortcuts F25, F26, F27. You see it all the time in this forum.

https://fedoraproject.org/wiki/FAQ#What_happened_to_.27Fedora_Core.3F.27


What happened to 'Fedora Core?'

Fedora Project originally had two repositories, Fedora Core and Fedora Extras. Packages in Fedora Core repository was maintained by Red Hat developers and was part of official Fedora media and Fedora Extras was a add-on online repository open to volunteers. As the community around Fedora project grew, technical advantages (packaging guidelines, package review process in Fedora Extras) and limitations (packages in Fedora Core cannot depend on Fedora Extras but only vice versa, Fedora Extras packages were not included in official media) and other non technical advantages (Red Hat developers and community volunteers can work together in a team, no perception of any repository being treated as second class etc) led to a merge of Fedora Core and Fedora Extras into a single repository just before the release of Fedora 7. Since the repositories have merged, Fedora Core is no more. The releases are simply called Fedora.

Why does the tag in my packages say "fc"?

"fc" used to denote "Fedora Core. However after the Fedora 7 release, it has been retroactively renamed to "Fedora Collection" along the lines of GCC which is now "GNU Compiler Collection". From the perspective of version comparisons in RPM, it was easier to adopt a new meaning for c rather than dropping it simply because it refers to the old Core appellation.

Therefore the OS is now named Fedora but fc remains in the package tags for continuity. Most people would contract Fedora 27 to F27.